Sample records for sequences simulation results

  1. Studies and simulations of the DigiCipher system

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Kipp, G.

    1993-01-01

    During this period, the development of simulators for the various high definition television (HDTV) systems proposed to the FCC continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system, with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. This simulator was tested using test sequences from the MPEG committee, and the results are extrapolated to HDTV video sequences. Once again, some caveats are in order. The sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. They are of much lower resolution than HDTV sequences would be, and therefore the extrapolations are not totally accurate; one would expect significantly higher compression in terms of bits per pixel with higher-resolution sequences. However, the simulator itself is a valid one, and should HDTV sequences become available, they could be used directly with it. A brief overview of the DigiCipher system is given, and some coding results obtained using the simulator are examined. These results are compared to those obtained using the ADTV system, evaluated in the context of the CCSDS specifications, and used to suggest how the DigiCipher system could be implemented in the NASA network. Simulations such as the ones reported can be biased by the particular source sequence used; to obtain more complete information about the system, one needs a reasonable set of models mirroring the various kinds of sources encountered during video coding. A set of models which can be used to effectively model the various possible scenarios is provided. 
As this is somewhat tangential to the other work reported, the results are included as an appendix.

  2. EEG and ECG changes during simulator operation reflect mental workload and vigilance.

    PubMed

    Dussault, Caroline; Jouanin, Jean-Claude; Philippe, Matthieu; Guezennec, Charles-Yannick

    2005-04-01

    Performing mission tasks in a simulator influences many neurophysiological measures. Quantitative assessments of electroencephalography (EEG) and electrocardiography (ECG) have made it possible to develop indicators of mental workload and to estimate relative physiological responses to cognitive requirements. To evaluate the effects of mental workload without actual physical risk, we studied the cortical and cardiovascular changes that occurred during simulated flight. There were 12 pilots (8 novices and 4 experts) who simulated a flight composed of 10 sequences that induced several different mental workload levels. EEG was recorded at 12 electrode sites during rest and flight sequences; ECG activity was also recorded. Subjective tests were used to evaluate anxiety and vigilance levels. Theta band activity was lower during the two simulated flight rest sequences than during visual and instrument flight sequences at central, parietal, and occipital sites (p < 0.05). On the other hand, rest sequences resulted in higher beta (at the C4 site; p < 0.05) and gamma (at the central, parietal, and occipital sites; p < 0.05) power than active segments. The mean heart rate (HR) was not significantly different during any simulated flight sequence, but HR was lower for expert subjects than for novices. The subjective tests revealed no significant anxiety and high values for vigilance levels before and during flight. The different flight sequences performed on the simulator resulted in electrophysiological changes that expressed variations in mental workload. These results corroborate those found during study of real flights, particularly during sequences requiring the heaviest mental workload.
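
    The band-power comparison this study relies on (theta vs. beta power at given electrode sites) can be sketched numerically. The following is a minimal illustration, not the authors' analysis pipeline; the sampling rate, signal composition, and band limits are invented for the example:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean FFT power of `signal` within the band [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

# Synthetic 1-s EEG-like trace sampled at 256 Hz: a 6 Hz (theta) tone plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(fs)

theta = band_power(eeg, fs, 4, 8)    # theta band, 4-8 Hz
beta = band_power(eeg, fs, 13, 30)   # beta band, 13-30 Hz
print(theta > beta)  # the 6 Hz component dominates, so True
```

    In practice such band powers would be computed per electrode site and per flight sequence (e.g., with Welch averaging), then compared across workload conditions.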

  3. Comparison of next generation sequencing technologies for transcriptome characterization

    PubMed Central

    2009-01-01

    Background We have developed a simulation approach to help determine the optimal mixture of sequencing methods for the most complete and cost-effective transcriptome sequencing. We compared simulation results for traditional capillary sequencing with "Next Generation" (NG) ultra high-throughput technologies. The simulation model was parameterized using mappings of 130,000 cDNA sequence reads to the Arabidopsis genome (NCBI Accession SRA008180.19). We also generated 454-GS20 sequences and de novo assemblies for the basal eudicot California poppy (Eschscholzia californica) and the magnoliid avocado (Persea americana) using a variety of methods for cDNA synthesis. Results The Arabidopsis reads tagged more than 15,000 genes, including new splice variants and extended UTR regions. Of the total 134,791 reads (13.8 MB), 119,518 (88.7%) mapped exactly to known exons, while 1,117 (0.8%) mapped to introns, 11,524 (8.6%) spanned annotated intron/exon boundaries, and 3,066 (2.3%) extended beyond the end of annotated UTRs. Sequence-based inference of relative gene expression levels correlated significantly with microarray data. As expected, NG sequencing of normalized libraries tagged more genes than non-normalized libraries, although non-normalized libraries yielded more full-length cDNA sequences. The Arabidopsis data were used to simulate additional rounds of NG and traditional EST sequencing, and various combinations of each. Our simulations suggest a combination of FLX and Solexa sequencing for optimal transcriptome coverage at modest cost. We have also developed ESTcalc http://fgp.huck.psu.edu/NG_Sims/ngsim.pl, an online webtool, which allows users to explore the results of this study by specifying individualized costs and sequencing characteristics. Conclusion NG sequencing technologies are a highly flexible set of platforms that can be scaled to suit different project goals. 
In terms of sequence coverage alone, the NG sequencing is a dramatic advance over capillary-based sequencing, but NG sequencing also presents significant challenges in assembly and sequence accuracy due to short read lengths, method-specific sequencing errors, and the absence of physical clones. These problems may be overcome by hybrid sequencing strategies using a mixture of sequencing methodologies, by new assemblers, and by sequencing more deeply. Sequencing and microarray outcomes from multiple experiments suggest that our simulator will be useful for guiding NG transcriptome sequencing projects in a wide range of organisms. PMID:19646272
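
    The saturation behavior such simulations exploit (deeper sequencing tags more genes, with diminishing returns) can be illustrated with a toy resampling sketch. The Zipf-like expression weights and the 15,000-gene pool below are hypothetical stand-ins; this is not the ESTcalc model:

```python
import random

def genes_tagged(expression_weights, n_reads, rng):
    """Draw n_reads reads from a weighted transcript pool; count distinct genes hit."""
    genes = list(range(len(expression_weights)))
    hits = rng.choices(genes, weights=expression_weights, k=n_reads)
    return len(set(hits))

rng = random.Random(42)
# Hypothetical skewed expression: a few highly expressed genes dominate the pool.
weights = [1.0 / (i + 1) for i in range(15000)]  # Zipf-like abundance

# Distinct genes tagged grows sub-linearly with read depth (saturation).
for n in (10_000, 100_000, 1_000_000):
    print(n, genes_tagged(weights, n, rng))
```

    Normalizing a library flattens the weights, which raises the distinct-gene count at a fixed depth, consistent with the normalized-vs-non-normalized comparison reported above.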

  4. CAMELOT: A machine learning approach for coarse-grained simulations of aggregation of block-copolymeric protein sequences

    PubMed Central

    Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.

    2015-01-01

    We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence from the exon 1 encoded sequence of the huntingtin protein. This sequence comprises 17 residues from the N-terminal end of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach are used to show that the adsorption and unfolding of the wild type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and is generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
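
    Boltzmann inversion, the first ingredient of the parameterization named above, can be sketched in a few lines: a sampled distribution p(r) from an atomistic simulation is inverted into a potential of mean force, U(r) = -kT ln p(r). The Gaussian bond-length samples below are synthetic stand-ins for trajectory data, and the real method layers regression and Bayesian optimization on top of this step:

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Hypothetical "all-atom" samples of a coarse-grained bond length (nm).
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.47, scale=0.03, size=100_000)

# Boltzmann inversion: U(r) = -kT ln p(r), shifted so min(U) = 0.
hist, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
U = -kT * np.log(hist[mask])
U -= U.min()

# For Gaussian samples this recovers a harmonic well centered on the mean.
r_min = centers[mask][U.argmin()]
print(round(float(r_min), 2))  # ≈ 0.47
```

    Inverting a Gaussian distribution yields a harmonic potential, which is why Boltzmann inversion alone suffices only for stiff, decoupled degrees of freedom; coupled or anharmonic interactions motivate the additional optimization machinery.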

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong

    Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.

  6. Introduction to study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    1992-01-01

    During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed; other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard, evaluated in the context of the CCSDS specifications, and used to suggest how the ADTV system could be implemented in the NASA network.

  7. Sequence-dependent DNA deformability studied using molecular dynamics simulations.

    PubMed

    Fujii, Satoshi; Kono, Hidetoshi; Takenaka, Shigeori; Go, Nobuhiro; Sarai, Akinori

    2007-01-01

    Proteins recognize specific DNA sequences not only through direct contact between amino acids and bases, but also indirectly, based on the sequence-dependent conformation and deformability of the DNA (indirect readout). We used molecular dynamics simulations to analyze the sequence-dependent DNA conformations of all 136 possible tetrameric sequences sandwiched between CGCG sequences. The deformability of dimeric steps obtained from the simulations is consistent with that derived from crystal structures. The simulation results further showed that the conformation and deformability of the tetramers can depend strongly on the flanking base pairs. The xATx tetramers are the most rigid and are not affected by the flanking base pairs; by contrast, the xYRx tetramers show the greatest flexibility and change their conformations depending on the base pairs at both ends, suggesting that tetramers with the same central dimer can show different deformabilities. These results suggest that analysis of dimeric steps alone may overlook some conformational features of DNA, and they provide insight into the mechanism of indirect readout during protein-DNA recognition. Moreover, the sequence dependence of DNA conformation and deformability may be used to estimate the contribution of indirect readout to the specificity of protein-DNA recognition, as well as to nucleosome positioning and the large-scale behavior of nucleic acids.
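
    A standard way to quantify deformability of the kind analyzed here is to invert the covariance of helical-parameter fluctuations from the trajectory into a stiffness (force-constant) matrix, F = kT·C⁻¹. The sketch below uses synthetic two-parameter "snapshots" with an invented covariance, not data from the paper:

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# Hypothetical trajectory of two coupled helical parameters (e.g., twist and roll,
# in degrees), standing in for MD snapshots of one base pair step.
rng = np.random.default_rng(1)
cov_true = np.array([[20.0, 5.0],
                     [5.0, 30.0]])
snapshots = rng.multivariate_normal(mean=[34.0, 2.0], cov=cov_true, size=50_000)

# Deformability analysis: stiffness matrix from the inverse covariance of the
# fluctuations; larger eigenvalues correspond to stiffer deformation modes.
C = np.cov(snapshots, rowvar=False)
F = kT * np.linalg.inv(C)
print(np.allclose(C, cov_true, atol=1.0))  # estimated covariance matches the input
```

    Comparing such F matrices across the 136 tetramer contexts is one way the flanking-sequence dependence described above can be made quantitative.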

  8. Molecular Dynamics Simulations of the 136 Unique Tetranucleotide Sequences of DNA Oligonucleotides. I. Research Design and Results on d(CpG) Steps

    PubMed Central

    Beveridge, David L.; Barreiro, Gabriela; Byun, K. Suzie; Case, David A.; Cheatham, Thomas E.; Dixit, Surjit B.; Giudice, Emmanuel; Lankas, Filip; Lavery, Richard; Maddocks, John H.; Osman, Roman; Seibert, Eleanore; Sklenar, Heinz; Stoll, Gautier; Thayer, Kelly M.; Varnai, Péter; Young, Matthew A.

    2004-01-01

    We describe herein a computationally intensive project aimed at carrying out molecular dynamics (MD) simulations including water and counterions on B-DNA oligomers containing all 136 unique tetranucleotide base sequences. This initiative was undertaken by an international collaborative effort involving nine research groups, the “Ascona B-DNA Consortium” (ABC). Calculations were carried out on the 136 cases imbedded in 39 DNA oligomers with repeating tetranucleotide sequences, capped on both ends by GC pairs and each having a total length of 15 nucleotide pairs. All MD simulations were carried out using a well-defined protocol, the AMBER suite of programs, and the parm94 force field. Phase I of the ABC project involves a total of ∼0.6 μs of simulation for systems containing ∼24,000 atoms. The resulting trajectories involve 600,000 coordinate sets and represent ∼400 gigabytes of data. In this article, the research design, details of the simulation protocol, informatics issues, and the organization of the results into a web-accessible database are described. Preliminary results from 15-ns MD trajectories are presented for the d(CpG) step in its 10 unique sequence contexts, and issues of stability and convergence, the extent of quasiergodic problems, and the possibility of long-lived conformational substates are discussed. PMID:15326025

  9. Advances in Discrete-Event Simulation for MSL Command Validation

    NASA Technical Reports Server (NTRS)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete event simulator SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate changes in more than fifty thousand flight software parameters and conditional command sequences, to predict the result of executing a conditional branch in a command sequence, and to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.
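
    The discrete-event core of a sequence simulator can be sketched with a priority queue: pop the earliest event, update simulated state, and let conditional logic schedule follow-on events. Everything below (the battery/heater states, thresholds, and timings) is a hypothetical illustration of the technique, not SEQGEN's actual model or interface:

```python
import heapq

def run(events, state):
    """Minimal discrete-event loop. Each event is (time, seq, action); an action
    receives (time, state) and returns a list of (delay, follow_up_action)."""
    queue = list(events)
    heapq.heapify(queue)
    seq = len(queue)  # tie-breaker so actions are never compared directly
    while queue:
        time, _, action = heapq.heappop(queue)
        for delay, follow_up in action(time, state):
            heapq.heappush(queue, (time + delay, seq, follow_up))
            seq += 1
    return state

# Hypothetical conditional command sequence: a branch whose outcome depends on
# a simulated spacecraft state, in the spirit of conditional sequence execution.
def check_battery(t, state):
    if state["battery"] < 50:  # conditional branch on simulated state
        return [(5.0, lambda t2, s: s.__setitem__("heater", "off") or [])]
    return []

def drain(t, state):
    state["battery"] -= 60
    return [(1.0, check_battery)]

final = run([(0.0, 0, drain)], {"battery": 100, "heater": "on"})
print(final)  # heater ends up off because the simulated battery dropped below 50
```

    Warning on unexpected state changes, as described above, would amount to checking each state update against a table of expected transitions inside the event loop.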

  10. Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2014-01-01

    Coarse-grained (CG) modeling is a well-acknowledged simulation approach for gaining insight into long-time-scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific (requiring user-defined assumptions about the folding scenario) to more sophisticated blind prediction methods for which only a protein sequence is required. Here we describe the framework protocol for simulations of the long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein folding model systems; the prediction results agreed well with the experimental results.

  11. Assessing pooled BAC and whole genome shotgun strategies for assembly of complex genomes

    PubMed Central

    2011-01-01

    Background We investigate whether pooling BAC clones and sequencing the pools can provide more accurate assembly of genome sequences than the "whole genome shotgun" (WGS) approach, and we quantify this accuracy increase. We compare the pooled BAC and WGS approaches using in silico simulations. Standard measures of assembly quality focus on assembly size and fragmentation, which are desirable for large whole genome assemblies. We propose additional measures enabling easy and visual comparison of assembly quality, such as rearrangements and redundant sequence content, relative to the known target sequence. Results The best assembly quality scores were obtained using 454 coverage of 15× linear and 5× paired (3 kb insert size) reads (15L-5P) on Arabidopsis. This regime gave similarly good results on four additional plant genomes of very different GC and repeat contents. BAC pooling improved assembly scores over WGS assembly, with coverage and redundancy scores improving the most. Conclusions BAC pooling works better than WGS; however, both require a physical map to order the scaffolds. Pool sizes up to 12 Mbp work well, suggesting this pooling density to be effective in medium-scale re-sequencing applications such as targeted sequencing of QTL intervals for candidate gene discovery. Assuming the current Roche/454 Titanium sequencing limitations, a 12 Mbp region could be re-sequenced with a full plate of linear reads and a half plate of paired-end reads, yielding 15L-5P coverage after read pre-processing. Our simulation suggests that massive over-sequencing may not improve accuracy. Our scoring measures can be used generally to evaluate and compare results of simulated genome assemblies. PMID:21496274

  12. A systematic molecular dynamics study of nearest-neighbor effects on base pair and base pair step conformations and fluctuations in B-DNA

    PubMed Central

    Lavery, Richard; Zakrzewska, Krystyna; Beveridge, David; Bishop, Thomas C.; Case, David A.; Cheatham, Thomas; Dixit, Surjit; Jayaram, B.; Lankas, Filip; Laughton, Charles; Maddocks, John H.; Michon, Alexis; Osman, Roman; Orozco, Modesto; Perez, Alberto; Singh, Tanya; Spackova, Nada; Sponer, Jiri

    2010-01-01

    It is well recognized that base sequence exerts a significant influence on the properties of DNA and plays a significant role in protein–DNA interactions vital for cellular processes. Understanding and predicting base sequence effects requires an extensive structural and dynamic dataset which is currently unavailable from experiment. A consortium of laboratories was consequently formed to obtain this information using molecular simulations. This article describes results providing information not only on all 10 unique base pair steps, but also on all possible nearest-neighbor effects on these steps. These results are derived from simulations of 50–100 ns on 39 different DNA oligomers in explicit solvent and using a physiological salt concentration. We demonstrate that the simulations are converged in terms of helical and backbone parameters. The results show that nearest-neighbor effects on base pair steps are very significant, implying that dinucleotide models are insufficient for predicting sequence-dependent behavior. Flanking base sequences can notably lead to base pair step parameters in dynamic equilibrium between two conformational sub-states. Although this study only provides limited data on next-nearest-neighbor effects, we suggest that such effects should be analyzed before attempting to predict the sequence-dependent behavior of DNA. PMID:19850719

  13. BlochSolver: A GPU-optimized fast 3D MRI simulator for experimentally compatible pulse sequences

    NASA Astrophysics Data System (ADS)

    Kose, Ryoichi; Kose, Katsumi

    2017-08-01

    A magnetic resonance imaging (MRI) simulator, which reproduces MRI experiments using computers, has been developed using two graphics-processing-unit (GPU) boards (GTX 1080). The MRI simulator was developed to run according to pulse sequences used in experiments. Experiments and simulations were performed to demonstrate the usefulness of the MRI simulator for three types of pulse sequences, namely, three-dimensional (3D) gradient-echo, 3D radio-frequency-spoiled gradient-echo, and gradient-echo multislice with practical matrix sizes. The results demonstrated that the calculation speed using two GPU boards was typically about 7 TFLOPS and about 14 times faster than the calculation speed using CPUs (two 18-core Xeons). We also found that MR images acquired by experiment could be reproduced using an appropriate number of subvoxels, and that 3D isotropic and two-dimensional multislice imaging experiments for practical matrix sizes could be simulated using the MRI simulator. Therefore, we concluded that such powerful MRI simulators are expected to become an indispensable tool for MRI research and development.
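
    The physics such a simulator integrates is the Bloch equation per (sub)voxel: precession about the local field plus T1/T2 relaxation. The single-voxel operator-splitting sketch below uses invented parameters; the real simulator additionally handles full pulse sequences, gradients, and millions of subvoxels on GPUs:

```python
import numpy as np

def bloch_step(M, dt, B_z, T1, T2, M0=1.0, gamma=2 * np.pi * 42.58e6):
    """Advance magnetization M = (Mx, My, Mz) by dt: rotate about z by the local
    field B_z, then apply T1/T2 relaxation (operator splitting of the Bloch equation)."""
    phi = gamma * B_z * dt                    # precession angle this step
    c, s = np.cos(phi), np.sin(phi)
    Mx = c * M[0] + s * M[1]
    My = -s * M[0] + c * M[1]
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    return np.array([Mx * E2, My * E2, M[2] * E1 + M0 * (1 - E1)])

# After a 90° pulse the magnetization lies along x; free precession in a small
# off-resonance field winds it around z while T2 shrinks the transverse part.
M = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    M = bloch_step(M, dt=1e-5, B_z=1e-6, T1=1.0, T2=0.1)

Mxy = np.hypot(M[0], M[1])
print(round(float(Mxy), 3))  # ≈ exp(-0.01 / 0.1) ≈ 0.905
```

    A gradient-echo sequence emerges from alternating such free-precession steps with RF rotations and spatially varying B_z from the gradients, which is what makes the computation embarrassingly parallel across voxels.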

  14. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE PAGES

    Huang, Qiuhua; Vittal, Vijay

    2018-05-09

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission-system-level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  15. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Qiuhua; Vittal, Vijay

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission-system-level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  16. SimulaTE: simulating complex landscapes of transposable elements of populations.

    PubMed

    Kofler, Robert

    2018-04-15

    Estimating the abundance of transposable elements (TEs) in populations (or tissues) promises to answer many open research questions. However, progress is hampered by the lack of concordance between different approaches for TE identification, and thus by potentially unreliable results. To address this problem, we developed SimulaTE, a tool that generates TE landscapes for populations using a newly developed domain-specific language (DSL). The simple syntax of our DSL allows for easily building even complex TE landscapes that have, for example, nested, truncated and highly diverged TE insertions. Reads may be simulated for the populations using different sequencing technologies (PacBio, Illumina paired-ends) and strategies (sequencing individuals and pooled populations). The comparison between the expected (i.e. simulated) and the observed results will guide researchers in finding the most suitable approach for a particular research question. SimulaTE is implemented in Python and available at https://sourceforge.net/projects/simulates/. Manual https://sourceforge.net/p/simulates/wiki/Home/#manual; Test data and tutorials https://sourceforge.net/p/simulates/wiki/Home/#walkthrough; Validation https://sourceforge.net/p/simulates/wiki/Home/#validation. robert.kofler@vetmeduni.ac.at.
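
    The core operation of a TE-landscape simulator, inserting (possibly truncated) element copies into a reference sequence, can be sketched as follows. This is a toy illustration of the idea, not SimulaTE's DSL or output format; all sequences and parameters are invented:

```python
import random

def insert_tes(genome, te, n_insertions, rng, truncate_prob=0.3):
    """Scatter n_insertions copies of a TE sequence into a genome string;
    some copies are 5'-truncated, mimicking degraded insertions."""
    # Insert at descending positions so earlier inserts don't shift later ones.
    positions = sorted(rng.sample(range(len(genome)), n_insertions), reverse=True)
    for pos in positions:
        copy = te
        if rng.random() < truncate_prob:
            cut = rng.randrange(1, len(te))   # 5'-truncated insertion
            copy = te[cut:]
        genome = genome[:pos] + copy + genome[pos:]
    return genome

rng = random.Random(7)
reference = "".join(rng.choice("ACGT") for _ in range(10_000))
te = "ACGTTGCA" * 40   # hypothetical 320-bp element

landscape = insert_tes(reference, te, n_insertions=5, rng=rng)
print(len(landscape) > len(reference))  # True: the genome grew by the inserted copies
```

    Nesting and divergence, which SimulaTE also supports, would correspond to inserting into previously inserted copies and mutating copies before insertion; reads would then be simulated from the resulting landscape.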

  17. Assessing pooled BAC and whole genome shotgun strategies for assembly of complex genomes.

    PubMed

    Haiminen, Niina; Feltus, F Alex; Parida, Laxmi

    2011-04-15

    We investigate whether pooling BAC clones and sequencing the pools can provide more accurate assembly of genome sequences than the "whole genome shotgun" (WGS) approach, and we quantify this accuracy increase. We compare the pooled BAC and WGS approaches using in silico simulations. Standard measures of assembly quality focus on assembly size and fragmentation, which are desirable for large whole genome assemblies. We propose additional measures enabling easy and visual comparison of assembly quality, such as rearrangements and redundant sequence content, relative to the known target sequence. The best assembly quality scores were obtained using 454 coverage of 15× linear and 5× paired (3 kb insert size) reads (15L-5P) on Arabidopsis. This regime gave similarly good results on four additional plant genomes of very different GC and repeat contents. BAC pooling improved assembly scores over WGS assembly, with coverage and redundancy scores improving the most. BAC pooling works better than WGS; however, both require a physical map to order the scaffolds. Pool sizes up to 12 Mbp work well, suggesting this pooling density to be effective in medium-scale re-sequencing applications such as targeted sequencing of QTL intervals for candidate gene discovery. Assuming the current Roche/454 Titanium sequencing limitations, a 12 Mbp region could be re-sequenced with a full plate of linear reads and a half plate of paired-end reads, yielding 15L-5P coverage after read pre-processing. Our simulation suggests that massive over-sequencing may not improve accuracy. Our scoring measures can be used generally to evaluate and compare results of simulated genome assemblies.

  18. Using nearly full-genome HIV sequence data improves phylogeny reconstruction in a simulated epidemic

    PubMed Central

    Yebra, Gonzalo; Hodcroft, Emma B.; Ragonnet-Cronin, Manon L.; Pillay, Deenan; Brown, Andrew J. Leigh; Fraser, Christophe; Kellam, Paul; de Oliveira, Tulio; Dennis, Ann; Hoppe, Anne; Kityo, Cissy; Frampton, Dan; Ssemwanga, Deogratius; Tanser, Frank; Keshani, Jagoda; Lingappa, Jairam; Herbeck, Joshua; Wawer, Maria; Essex, Max; Cohen, Myron S.; Paton, Nicholas; Ratmann, Oliver; Kaleebu, Pontiano; Hayes, Richard; Fidler, Sarah; Quinn, Thomas; Novitsky, Vladimir; Haywards, Andrew; Nastouli, Eleni; Morris, Steven; Clark, Duncan; Kozlakidis, Zisis

    2016-01-01

    HIV molecular epidemiology studies analyse viral pol gene sequences due to their availability, but whole genome sequencing allows other genes to be used. We aimed to determine which gene(s) provide(s) the best approximation to the real phylogeny by analysing a simulated epidemic (created as part of the PANGEA_HIV project) with a known transmission tree. We sub-sampled a simulated dataset of 4662 sequences into different combinations of genes (gag-pol-env, gag-pol, gag, pol, env and partial pol) and sampling depths (100%, 60%, 20% and 5%), generating 100 replicates for each case. We built maximum-likelihood trees for each combination using RAxML (GTR + Γ), and compared their topologies to the corresponding true tree’s using CompareTree. The accuracy of the trees was significantly proportional to the length of the sequences used, with the gag-pol-env datasets showing the best performance and gag and partial pol sequences showing the worst. The lowest sampling depths (20% and 5%) greatly reduced the accuracy of tree reconstruction and showed high variability among replicates, especially when using the shortest gene datasets. In conclusion, using longer sequences derived from nearly whole genomes will improve the reliability of phylogenetic reconstruction. With low sample coverage, results can be highly variable, particularly when based on short sequences. PMID:28008945

  19. Using nearly full-genome HIV sequence data improves phylogeny reconstruction in a simulated epidemic.

    PubMed

    Yebra, Gonzalo; Hodcroft, Emma B; Ragonnet-Cronin, Manon L; Pillay, Deenan; Brown, Andrew J Leigh

    2016-12-23

    HIV molecular epidemiology studies analyse viral pol gene sequences due to their availability, but whole genome sequencing allows other genes to be used. We aimed to determine which gene(s) provide(s) the best approximation to the real phylogeny by analysing a simulated epidemic (created as part of the PANGEA_HIV project) with a known transmission tree. We sub-sampled a simulated dataset of 4662 sequences into different combinations of genes (gag-pol-env, gag-pol, gag, pol, env and partial pol) and sampling depths (100%, 60%, 20% and 5%), generating 100 replicates for each case. We built maximum-likelihood trees for each combination using RAxML (GTR + Γ), and compared their topologies to the corresponding true tree's using CompareTree. The accuracy of the trees was significantly proportional to the length of the sequences used, with the gag-pol-env datasets showing the best performance and gag and partial pol sequences showing the worst. The lowest sampling depths (20% and 5%) greatly reduced the accuracy of tree reconstruction and showed high variability among replicates, especially when using the shortest gene datasets. In conclusion, using longer sequences derived from nearly whole genomes will improve the reliability of phylogenetic reconstruction. With low sample coverage, results can be highly variable, particularly when based on short sequences.

  20. Sim3C: simulation of Hi-C and Meta3C proximity ligation sequencing technologies.

    PubMed

    DeMaere, Matthew Z; Darling, Aaron E

    2018-02-01

    Chromosome conformation capture (3C) and Hi-C DNA sequencing methods have rapidly advanced our understanding of the spatial organization of genomes and metagenomes. Many variants of these protocols have been developed, each with their own strengths. Currently there is no systematic means for simulating sequence data from this family of sequencing protocols, potentially hindering the advancement of algorithms to exploit this new datatype. We describe a computational simulator that, given simple parameters and reference genome sequences, will simulate Hi-C sequencing on those sequences. The simulator models the basic spatial structure in genomes that is commonly observed in Hi-C and 3C datasets, including the distance-decay relationship in proximity ligation, differences in the frequency of interaction within and across chromosomes, and the structure imposed by cells. A means to model the 3D structure of randomly generated topologically associating domains is provided. The simulator considers several sources of error common to 3C and Hi-C library preparation and sequencing methods, including spurious proximity ligation events and sequencing error. We have introduced the first comprehensive simulator for 3C and Hi-C sequencing protocols. We expect the simulator to have use in testing of Hi-C data analysis algorithms, as well as more general value for experimental design, where questions such as the required depth of sequencing, enzyme choice, and other decisions can be made in advance in order to ensure adequate statistical power with respect to experimental hypothesis testing.
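
    The distance-decay relationship mentioned above can be sketched in a few lines: intra-chromosomal ligation partners are drawn with probability falling off as a power of their genomic separation. This is not Sim3C's actual code; the exponent and genome length are illustrative assumptions:

```python
import random

# Hedged sketch: sample proximity-ligation separations s with P(s) ~ s**(-alpha),
# the distance-decay behaviour modelled for intra-chromosomal contacts.
def sample_separations(n, genome_len, alpha=1.5, seed=0):
    rng = random.Random(seed)
    sites = list(range(1, genome_len + 1))
    weights = [s ** (-alpha) for s in sites]
    return rng.choices(sites, weights=weights, k=n)

seps = sample_separations(10000, genome_len=1000)
near = sum(s <= 10 for s in seps) / len(seps)
far = sum(s > 500 for s in seps) / len(seps)
print(near > far)  # short-range contacts dominate under distance decay
```

    A full simulator would layer on inter-chromosomal contact rates, cell structure, spurious ligation events and sequencing error, as the abstract describes.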

  1. The R package 'RLumModel': Simulating charge transfer in quartz

    NASA Astrophysics Data System (ADS)

    Friedrich, Johannes; Kreutzer, Sebastian; Schmidt, Christoph

    2017-04-01

    Kinetic models of quartz luminescence have gained an important role for predicting experimental results and for understanding charge transfers in (natural) quartz as well as in other dosimetric materials, e.g., Al2O3:C. We present the R package 'RLumModel', offering an easy-to-use tool for simulating quartz luminescence signals (TL, OSL, LM-OSL and RF) based on five integrated and published parameter sets, as well as the option of supplying one's own parameters. Simulation commands can be created (a) with the Risø Sequence Editor, (b) with a built-in SAR sequence generator or (c) with self-explanatory keywords for customised sequences. Results can be analysed seamlessly using the R package 'Luminescence', along with a visualisation of the concentrations of electrons and holes in every trap/centre, as well as in the valence and conduction bands, during all stages of the simulation. Modelling luminescence signals can help in understanding the charge transfer processes occurring in nature or during measurements in the laboratory. This will lead to a better understanding of several processes relevant to geoscientific questions, because quartz is the second most abundant mineral in the Earth's continental crust.

  2. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, were simulated. The final dynamic scenes are realistic and run in real time, with frame rates up to 100 Hz. An image sequence is obtained by saving all the scene grey-level data from the same viewpoint. The analysis shows that, in both the infrared and visible bands, the greyscale variations of the simulated hyperspectral images are consistent with theoretical analysis.
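
    The multi-band sensor modelling described above amounts to integrating a scene spectrum over each band's response. The rectangular band response and the toy spectrum below are assumptions for illustration; a real simulator would use measured spectral response curves and a physical radiance model:

```python
# Hedged sketch of the multi-band sensor idea; all numbers are illustrative.
def band_radiance(spectrum, centre, bandwidth, step=0.01):
    """Average spectrum(lam) over a rectangular band centre +/- bandwidth/2 (um)."""
    lo, hi = centre - bandwidth / 2.0, centre + bandwidth / 2.0
    n = max(1, round((hi - lo) / step))
    return sum(spectrum(lo + (i + 0.5) * (hi - lo) / n) for i in range(n)) / n

# Toy triangular spectrum peaking at 4 um (MWIR); a real scene spectrum would
# come from the radiation physics model and material parameters.
def spectrum(lam):
    return max(0.0, 1.0 - abs(lam - 4.0) / 4.0)

# One channel each in the visible, MWIR and LWIR bands, 0.05 um bandwidth:
channels = [band_radiance(spectrum, c, 0.05) for c in (0.55, 4.0, 10.0)]
print(channels)
```

    Repeating this per pixel, per frame, and per band centre/bandwidth pair yields the kind of band-resolved image sequence the paper describes.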

  3. Influence of DNA sequence on the structure of minicircles under torsional stress

    PubMed Central

    Wang, Qian; Irobalieva, Rossitza N.; Chiu, Wah; Schmid, Michael F.; Fogg, Jonathan M.; Zechiedrich, Lynn

    2017-01-01

    The sequence dependence of the conformational distribution of DNA under various levels of torsional stress is an important unsolved problem. Combining theory and coarse-grained simulations shows that the DNA sequence and a structural correlation due to topology constraints of a circle are the main factors that dictate the 3D structure of a 336 bp DNA minicircle under torsional stress. We found that DNA minicircle topoisomers can have multiple bend locations under high torsional stress and that the positions of these sharp bends are determined by the sequence, and by a positive mechanical correlation along the sequence. We showed that simulations and theory are able to provide sequence-specific information about individual DNA minicircles observed by cryo-electron tomography (cryo-ET). We provided a sequence-specific cryo-ET tomogram fitting of DNA minicircles, registering the sequence within the geometric features. Our results indicate that the conformational distribution of minicircles under torsional stress can be designed, which has important implications for using minicircle DNA for gene therapy. PMID:28609782

  4. Noncontrast Peripheral MRA with Spiral Echo Train Imaging

    PubMed Central

    Fielden, Samuel W.; Mugler, John P.; Hagspiel, Klaus D.; Norton, Patrick T.; Kramer, Christopher M.; Meyer, Craig H.

    2015-01-01

    Purpose: To develop a spin echo train sequence with spiral readout gradients and improved artery–vein contrast for noncontrast angiography. Theory: Venous T2 becomes shorter as the echo spacing is increased in echo train sequences, improving contrast. Spiral acquisitions, due to their data collection efficiency, facilitate long echo spacings without increasing scan times. Methods: Bloch equation simulations were performed to determine optimal sequence parameters, and the sequence was applied in five volunteers. In two volunteers, the sequence was performed with a range of echo times and echo spacings to compare with the theoretical contrast behavior. A Cartesian version of the sequence was used to compare contrast appearance with the spiral sequence. Additionally, spiral parallel imaging was optionally used to improve image resolution. Results: In vivo, artery–vein contrast properties followed the general shape predicted by simulations, and good results were obtained in all stations. Compared with a Cartesian implementation, the spiral sequence had superior artery–vein contrast, better spatial resolution (1.2 mm² versus 1.5 mm²), and was acquired in less time (1.4 min versus 7.5 min). Conclusion: The spiral spin echo train sequence can be used for flow-independent angiography to generate three-dimensional angiograms of the periphery quickly and without the use of contrast agents. PMID:24753164
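
    The contrast mechanism in the Theory section can be illustrated numerically: signal decays roughly as exp(-TE_eff/T2), and if the effective venous T2 shortens as echo spacing grows, the artery–vein signal gap widens. The T2 values and the linear shortening model below are invented for illustration, not the paper's Bloch-simulation parameters:

```python
import math

# Hedged illustration of the echo-spacing effect; T2 values (ms) and the
# linear venous-T2 shortening are assumptions, not the paper's parameters.
T2_ARTERY = 250.0   # assumed arterial blood T2, ms
TE_EFF = 80.0       # assumed effective echo time, ms

def signal(te_eff, t2):
    return math.exp(-te_eff / t2)

def venous_t2(echo_spacing_ms, t2_base=180.0, shortening=15.0):
    """Toy model: effective venous T2 drops as echo spacing grows."""
    return max(20.0, t2_base - shortening * echo_spacing_ms)

contrasts = [signal(TE_EFF, T2_ARTERY) - signal(TE_EFF, venous_t2(esp))
             for esp in (2.0, 6.0, 10.0)]
print(contrasts)  # artery-vein contrast grows with echo spacing
```

    The spiral readout matters precisely because it makes the longer echo spacings at the right of this trend affordable without lengthening the scan.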

  5. The effects of computer-simulated experiments on high school biology students' problem-solving skills and achievement

    NASA Astrophysics Data System (ADS)

    Carmack, Gay Lynn Dickinson

    2000-10-01

    This two-part quasi-experimental repeated measures study examined whether computer-simulated experiments (CSE) have an effect on the problem-solving skills of high school biology students in a school-within-a-school magnet program. Specifically, the study identified episodes in a simulation sequence where problem-solving skills improved. In the Fall academic semester, experimental group students (n = 30) were exposed to two simulations: CaseIt! and EVOLVE!. Control group students participated in an internet research project and a paper Hardy-Weinberg activity. In the Spring academic semester, experimental group students were exposed to three simulations: Genetics Construction Kit, CaseIt! and EVOLVE!. Spring control group students participated in a Drosophila lab, an internet research project, and Advanced Placement lab 8. Results indicate that the Fall and Spring experimental groups experienced significant gains in scientific problem solving after the second simulation in the sequence. These gains were independent of the simulation sequence and of the amount of time spent on the simulations, and were significantly greater than control group scores in the Fall. The Spring control group significantly outscored all other study groups on both pretest measures. Even so, the Spring experimental group's problem-solving performance caught up to the Spring control group's after the third simulation. There were no significant differences between control and experimental groups on content achievement. Results indicate that CSE are as effective as traditional laboratories in promoting scientific problem solving and are a useful tool for improving students' scientific problem-solving skills. Moreover, retention of problem-solving skills is enhanced by utilizing more than one simulation.

  6. A Naval Marksmanship Training Transfer Study: The Use of Indoor Simulated Marksmanship Trainers to Train for Live Fire

    DTIC Science & Technology

    2012-03-01

    Performance was assessed on the standard Navy Handgun Qualification Course. Results partially supported the hypotheses: the simulation group showed greater improvement in MPI.

  7. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We developed an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for the record matching the nystagmus response extracted from the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and stored in the database. To enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence, and the causes and conditions of BPPV are estimated from the matched simulation conditions. We applied our system to two real eye movement image sequences from patients with BPPV to show its validity.

  8. In silico characterization and analysis of RTBP1 and NgTRF1 protein through MD simulation and molecular docking - A comparative study.

    PubMed

    Mukherjee, Koel; Pandey, Dev Mani; Vidyarthi, Ambarish Saran

    2015-02-06

    Gaining access to sequence and structure information of telomere binding proteins helps in understanding the essential biological processes involved in the conserved, sequence-specific interaction between DNA and these proteins. Rice telomere binding protein (RTBP1) and Nicotiana glutinosa telomere repeat binding factor (NgTRF1) are helix-turn-helix motif proteins that play a role in telomeric DNA protection and length regulation. Both proteins share the same type of domain, but to date there has been very little reported on in silico studies of the complete proteins. Here we present a comparative study of the two proteins through modeling of the complete proteins, physicochemical characterization, MD simulation and DNA-protein docking. I-TASSER and the CLC Protein Workbench were used to predict the protein 3D structures and to compute the parameters characterizing the proteins. MD simulation was carried out with the GROMOS force field in GROMACS for a 10 ns time stretch. The simulated 3D structures were docked with template DNA (modeled in 3D-DART) containing the conserved TTTAGGG sequence motif using the HADDOCK web server. The analysis revealed that around 120 amino acids in the tail region show good sequence similarity between the proteins. Molecular modeling, sequence characterization and secondary structure prediction also indicate similarity between the proteins' structures and sequences. The MD simulation results, summarized in RMSD, RMSF, Rg, PCA and energy plots, likewise convey similar motional behavior. The best complex formed in docking for both proteins points to the same first interaction site, mainly the helix3 region of the DNA-binding domain. The overall computational analysis reveals that the RTBP1 and NgTRF1 proteins display a good amount of similarity in their physicochemical properties, structure, dynamics and binding mode.

  10. Flight Experiment Investigation of General Aviation Self-Separation and Sequencing Tasks

    NASA Technical Reports Server (NTRS)

    Murdoch, Jennifer L.; Ramiscal, Ermin R.; McNabb, Jennifer L.; Bussink, Frank J. L.

    2005-01-01

    A new flight operations concept called Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) was developed to increase capacity during Instrument Meteorological Conditions (IMC) at non-towered, non-radar airports by enabling concurrent operations of multiple aircraft. One aspect of this concept involves having pilots safely self-separate from other aircraft during approaches into these airports using appropriate SATS HVO procedures. A flight experiment was conducted to determine if instrument-rated general aviation (GA) pilots could self-separate and sequence their ownship aircraft, while following a simulated aircraft, into a non-towered, non-radar airport during simulated IMC. Six GA pilots' workload levels and abilities to perform self-separation and sequencing procedures while flying a global positioning system (GPS) instrument approach procedure were examined. The results showed that the evaluation pilots maintained at least the minimum specified separation between their ownship aircraft and simulated traffic and maintained their assigned landing sequence 100-percent of the time. Neither flight path deviations nor subjective workload assessments were negatively impacted by the additional tasks of self-separating and sequencing during these instrument approaches.

  11. Population genetics and molecular evolution of DNA sequences in transposable elements. I. A simulation framework.

    PubMed

    Kijima, T E; Innan, Hideki

    2013-11-01

    A population genetic simulation framework is developed to understand the behavior and molecular evolution of DNA sequences of transposable elements. Our model incorporates random transposition and excision of transposable element (TE) copies, two modes of selection against TEs, and degeneration of transpositional activity by point mutations. We first investigated the relationships between the behavior of the copy number of TEs and these parameters. Our results show that when selection is weak, the genome can maintain a relatively large number of TEs, but most of them are less active. In contrast, with strong selection, the genome can maintain only a limited number of TEs but the proportion of active copies is large. In such a case, there could be substantial fluctuations of the copy number over generations. We also explored how DNA sequences of TEs evolve through the simulations. In general, active copies form clusters around the original sequence, while less active copies have long branches specific to themselves, exhibiting a star-shaped phylogeny. It is demonstrated that the phylogeny of TE sequences could be informative to understand the dynamics of TE evolution.
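
    The copy-number dynamics described above can be caricatured with a tiny birth-death simulation: each generation, copies transpose with probability u, are excised with probability v, and carriers of many copies are penalized by selection of strength s. All parameter values, and the reduction to a single lineage, are illustrative assumptions rather than the authors' model:

```python
import random

# Hedged sketch (not the paper's framework): birth-death dynamics of TE copy
# number in one lineage, with transposition (u), excision (v) and a crude
# copy-number-dependent purge standing in for purifying selection (s).
def simulate_copy_number(generations, u=0.05, v=0.01, s=0.002, n0=10, seed=1):
    rng = random.Random(seed)
    n = n0
    history = [n]
    for _ in range(generations):
        births = sum(rng.random() < u for _ in range(n))
        deaths = sum(rng.random() < v for _ in range(n))
        n = max(0, n + births - deaths)
        while n > 0 and rng.random() < s * n:
            n -= 1  # selection removes an excess copy
        history.append(n)
    return history

hist = simulate_copy_number(200)
print(hist[0], hist[-1])
```

    Raising s caps the copy number and makes it fluctuate more sharply, echoing the weak-versus-strong-selection contrast in the abstract; the full model additionally tracks point mutations that degrade transpositional activity.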

  12. Neutrality and evolvability of designed protein sequences

    NASA Astrophysics Data System (ADS)

    Bhattacherjee, Arnab; Biswas, Parbati

    2010-07-01

    The effect of foldability on a protein's evolvability is analyzed by a two-pronged approach consisting of a self-consistent mean-field theory and Monte Carlo simulations. Theory and simulation models representing protein sequences with binary patterning of amino acid residues compatible with a particular foldability criterion are used. This generalized foldability criterion is derived using the high-temperature cumulant expansion approximating the free energy of folding. The effect of cumulative point mutations on these designed proteins is studied under neutral conditions. Robustness, a protein's ability to tolerate random point mutations, is determined with a selective pressure on stability (ΔΔG) for the theory-designed sequences, which are found to be more robust than the Monte Carlo and mean-field-biased Monte Carlo generated sequences. The results show that this foldability criterion selects viable protein sequences more effectively than the Monte Carlo method, which has a marked effect on how the selective pressure shapes the evolutionary sequence space. These observations may impact de novo sequence design and its applications in protein engineering.

  13. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. The metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back into time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
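
    The magnitude/sequence separation underlying MPESA can be illustrated with Nash-Sutcliffe efficiency (NSE): comparing the sorted observed and simulated series isolates magnitude error (how well the distribution of values is reproduced), while the usual paired comparison also includes sequence (timing) error. The short series below are invented:

```python
# Hedged sketch of the magnitude/sequence idea using Nash-Sutcliffe
# efficiency (NSE); the observed/simulated series are toy data.
def nse(obs, sim):
    """NSE = 1 - SSE / variance of observations about their mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
sim = [1.0, 2.0, 3.0, 4.0, 5.0]   # right magnitudes, wrong timing

combined = nse(obs, sim)                        # penalised for timing errors
magnitude_only = nse(sorted(obs), sorted(sim))  # distribution match only
print(combined, magnitude_only)
```

    Here the simulated series reproduces the observed distribution exactly, so the magnitude-only NSE is 1.0 while the combined NSE drops to 0.6; the gap is attributable entirely to sequence (timing) error, which is the kind of diagnostic separation the tool provides.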

  14. Memetic algorithms for de novo motif-finding in biomedical sequences.

    PubMed

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, and to apply it to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. The implementation of the memetic motif-finding algorithm (MaMotif) therefore has a number of key features, including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with several similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (the let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient of the algorithms compared: it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover transcription factor binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover RNA splicing signals in gene expression, precisely find the highly conserved helix motif in the transmembrane protein sequences, and correctly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
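
    One ingredient common to motif finders of this kind can be shown compactly: scoring a candidate set of aligned motif instances by information content, the quantity such local-alignment searches try to maximise. The scoring function below is a generic textbook version, not MaMotif's actual objective, and the DNA sites are toy data:

```python
import math

# Hedged, generic motif-alignment score: total information content in bits
# for an ungapped DNA alignment (2 bits minus the entropy of each column).
def information_content(sites):
    width, n = len(sites[0]), len(sites)
    total = 0.0
    for j in range(width):
        counts = {}
        for s in sites:
            counts[s[j]] = counts.get(s[j], 0) + 1
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += 2.0 - entropy  # 2 bits is the maximum per DNA column
    return total

conserved = ["TATAAT", "TATAAT", "TATGAT", "TATAAT"]
scrambled = ["ACGTAC", "TGCATG", "CATGCA", "GTACGT"]
print(information_content(conserved), information_content(scrambled))
```

    A motif finder, memetic or otherwise, searches over candidate site positions in the input sequences for the set that maximises a score of this general character; the conserved toy alignment scores far above the scrambled one.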

  16. The use of molecular dynamics simulations to evaluate the DNA sequence-selectivity of G-A cross-linking PBD-duocarmycin dimers.

    PubMed

    Jackson, Paul J M; Rahman, Khondaker M; Thurston, David E

    2017-01-01

    The pyrrolobenzodiazepine (PBD) and duocarmycin families are DNA-interactive agents that covalently bond to guanine (G) and adenine (A) bases, respectively, and that have been joined together to create synthetic dimers capable of cross-linking G-G, A-A, and G-A bases. Three G-A alkylating dimers have been reported in publications to date, with defined DNA-binding sites proposed for two of them. In this study we have used molecular dynamics simulations to elucidate preferred DNA-binding sites for the three published molecules. For the PBD-CPI dimer UTA-6026 (1), our simulations correctly predicted its favoured binding site (i.e., 5'-C(G)AATTA-3') as identified by DNA cleavage studies. However, for the PBD-CI molecule ('Compound 11', 3), we were unable to reconcile the results of our simulations with the reported preferred cross-linking sequence (5'-ATTTTCC(G)-3'). We found that the molecule is too short to span the five base pairs between the A and G bases as claimed, and should instead target a sequence such as 5'-ATTTC(G)-3' with two fewer base pairs between the reacting G and A residues. Our simulation results for this hybrid dimer are also in accord with the very low interstrand cross-linking and in vitro cytotoxicity activities reported for it. Although a preferred cross-linking sequence was not reported for the third hybrid dimer ('27eS', 2), our simulations predict that it should span two base pairs between covalently reacting G and A bases (e.g., 5'-GTAT(A)-3'). Copyright © 2016. Published by Elsevier Ltd.

  17. Formation of specific amino acid sequences during carbodiimide-mediated condensation of amino acids in aqueous solution, and computer-simulated sequence generation

    NASA Astrophysics Data System (ADS)

    Hartmann, Jürgen; Nawroth, Thomas; Dose, Klaus

    1984-12-01

    Carbodiimide-mediated peptide synthesis in aqueous solution has been studied with respect to the self-ordering of amino acids. The copolymerisation of amino acids in the presence of glutamic acid or pyroglutamic acid leads to short pyroglutamyl peptides; without pyroglutamic acid, the formation of higher polymers is favoured. The interactions of the amino acids and the peptides, however, are very complex, and the experimental results are therefore rather difficult to explain. Some of them, however, can be explained with the aid of computer simulation programs. Considering only the tripeptide fraction, both the copolymerisation of pyroGlu, Ala and Leu and the simulated copolymerisation lead to pyroGlu-Ala-Leu as the main reaction product. The amino acid composition of the insoluble peptides formed during the copolymerisation of Ser, Gly, Ala, Val, Phe, Leu and Ile corresponds in part to the computer-simulated copolymerisation data.
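
    A computer-simulated sequence generation of the kind described can be sketched as weighted random chain growth. The relative reactivities and the rule that every chain starts with pyroglutamic acid (which blocks the N-terminus) are simplifying assumptions for illustration; the authors' program encoded the actual coupling chemistry:

```python
import random
from collections import Counter

# Hedged sketch of simulated copolymerisation: weighted random chain growth.
# The reactivity weights and pyroGlu-initiation rule are illustrative only.
REACTIVITY = {"Ala": 2.0, "Leu": 1.0}   # assumed relative coupling rates

def simulate_tripeptides(n, seed=0):
    rng = random.Random(seed)
    residues = list(REACTIVITY)
    weights = list(REACTIVITY.values())
    products = Counter()
    for _ in range(n):
        # pyroglutamic acid caps the N-terminus, so chains start with it
        chain = ["pyroGlu"] + rng.choices(residues, weights=weights, k=2)
        products["-".join(chain)] += 1
    return products

counts = simulate_tripeptides(5000)
print(counts.most_common(3))
```

    Tallying the products over many simulated chains is what lets such a program predict which tripeptide dominates under a given set of assumed reactivities.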

  18. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method of image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and the pulse sequences of an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of the simulation RF coils was measured and compared using the standard sequence with different clinical diagnostic coils. We used simulation sequences with simulation coils to test image quality and the advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased by about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. The intensity uniformity could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequences test, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well controlled at the isocenter and 10 cm off-center, within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for the quantitative evaluation of images and the commissioning of an MRI simulator. The baseline performances of the simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
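
    One of the quantitative checks mentioned above, image intensity uniformity, is commonly reported as percent integral uniformity (PIU), computed from the highest and lowest mean signals in small ROIs inside the phantom. The ROI means below are toy numbers; a real implementation would place the ROIs on the phantom image automatically:

```python
# Hedged sketch of the percent integral uniformity (PIU) calculation used
# in ACR-style uniformity tests; the ROI mean values are invented.
def percent_integral_uniformity(high_roi_mean, low_roi_mean):
    """PIU = 100 * (1 - (high - low) / (high + low))."""
    return 100.0 * (1.0 - (high_roi_mean - low_roi_mean)
                    / (high_roi_mean + low_roi_mean))

# Toy ROI means in arbitrary units; a PIU near 100% means a flat image.
piu = percent_integral_uniformity(520.0, 455.0)
print(round(piu, 1))
```

    For context, a commonly cited ACR action limit is a PIU of at least 87.5% (lower at 3 T); the current ACR guidance should be checked before adopting such thresholds in routine QA.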

  19. COMPUTER SIMULATION STUDY OF AMYLOID FIBRIL FORMATION BY PALINDROMIC SEQUENCES IN PRION PEPTIDES

    PubMed Central

    Wagoner, Victoria; Cheon, Mookyung; Chang, Iksoo; Hall, Carol

    2011-01-01

    We simulate the aggregation of large systems containing palindromic peptides from the Syrian hamster prion protein SHaPrP 113–120 (AGAAAAGA) and the mouse prion protein MoPrP 111–120 (VAGAAAAGAV) and eight sequence variations: GAAAAAAG, (AG)4, A8, GAAAGAAA, A10, V10, GAVAAAAVAG, and VAVAAAAVAV. The first two peptides are thought to act as the Velcro that holds the parent prion proteins together in amyloid structures and can form fibrils themselves. Kinetic events along the fibrillization pathway influence the types of structures that occur, and variations in the sequence affect aggregation kinetics and fibrillar structure. Discontinuous molecular dynamics simulations using the PRIME20 force field are performed on systems containing 48 peptides starting from a random coil configuration. Depending on the sequence, fibrillar structures form spontaneously over a range of temperatures, below which amorphous aggregates form and above which no aggregation occurs. AGAAAAGA forms well organized fibrillar structures whereas VAGAAAAGAV forms less well organized structures that are partially fibrillar and partially amorphous. The degree of order in the fibrillar structure stems in part from the types of kinetic events leading up to its formation, with AGAAAAGA forming less amorphous structures early in the simulation than VAGAAAAGAV. The ability to form fibrils increases as the chain length and the length of the stretch of hydrophobic residues increase. However, as the hydrophobicity of the sequence increases, the ability to form well-ordered structures decreases. Thus, longer hydrophobic sequences form slightly disordered aggregates that are partially fibrillar and partially amorphous. Subtle changes in sequence result in slightly different fibril structures. PMID:21557317

  20. Time-dependent Data System (TDDS); an interactive program to assemble, manage, and appraise input data and numerical output of flow/transport simulation models

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.; Baltzer, R.A.

    1996-01-01

    A system of functional utilities and computer routines, collectively identified as the Time-Dependent Data System (TDDS), has been developed and documented by the U.S. Geological Survey. The TDDS is designed for processing time sequences of discrete, fixed-interval, time-varying geophysical data--in particular, hydrologic data. Such data include various dependent variables and related parameters typically needed as input for execution of one-, two-, and three-dimensional hydrodynamic/transport and associated water-quality simulation models. Such data can also include time sequences of results generated by numerical simulation models. Specifically, the TDDS provides the functional capabilities to process, store, retrieve, and compile data in a Time-Dependent Data Base (TDDB) in response to interactive user commands or pre-programmed directives. Thus, the TDDS, in conjunction with a companion TDDB, provides a ready means for the processing, preparation, and assembly of time sequences of data for input to models; the collection, categorization, and storage of simulation results from models; and the intercomparison of field data and simulation results. The TDDS can be used to edit and verify prototype, time-dependent data to affirm that selected sequences of data are accurate, contiguous, and appropriate for numerical simulation modeling. It can be used to prepare time-varying data in a variety of formats, such as tabular lists, sequential files, arrays, and graphical displays, as well as line-printer plots of single- or multiparameter data sets. The TDDB is organized and maintained as a direct-access data base by the TDDS, thus providing simple yet efficient data management and access. A single, easily used program interface that provides all access to and from a particular TDDB is available for use directly within models, other user-provided programs, and other data systems.
This interface, together with each major functional utility of the TDDS, is described and documented in this report.
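
    The direct-access organization described above works because a fixed-interval time sequence lets a timestamp be converted arithmetically into a record index, with no searching. A minimal sketch of that indexing idea (class and method names are hypothetical, not the TDDS program interface):

```python
class FixedIntervalSeries:
    """Direct-access store: record i holds the value at start + i * interval."""

    def __init__(self, start, interval_s, values):
        self.start = start          # timestamp of the first record
        self.interval = interval_s  # fixed spacing between records, in seconds
        self.values = list(values)

    def at(self, t):
        # O(1) retrieval: compute the record index from the timestamp.
        offset = t - self.start
        idx, rem = divmod(offset, self.interval)
        if rem or idx < 0 or idx >= len(self.values):
            raise KeyError(f"no record at t={t}")
        return self.values[idx]

# Three 15-minute (900 s) stage readings starting at t=0.
series = FixedIntervalSeries(start=0, interval_s=900, values=[1.2, 1.4, 1.3])
print(series.at(900))  # -> 1.4
```

    The same arithmetic mapping makes storing a model's fixed-timestep output equally simple: the writer appends in order and never needs an index structure.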

  1. Solving Assembly Sequence Planning using Angle Modulated Simulated Kalman Filter

    NASA Astrophysics Data System (ADS)

    Mustapa, Ainizar; Yusof, Zulkifli Md.; Adam, Asrul; Muhammad, Badaruddin; Ibrahim, Zuwairie

    2018-03-01

    This paper presents an implementation of the Simulated Kalman Filter (SKF) algorithm for optimizing an Assembly Sequence Planning (ASP) problem. The SKF search strategy consists of three simple steps: predict, measure, and estimate. The main objective of ASP is to determine the sequence of component installation that shortens assembly time or saves assembly costs. Initially, a permutation sequence is generated to represent each agent. Each agent is then subjected to a precedence-matrix constraint to produce a feasible assembly sequence. Next, the Angle Modulated SKF (AMSKF) is proposed for solving the ASP problem. The main idea of the angle-modulated approach to solving a combinatorial optimization problem is to use a function, g(x), to create a continuous signal. The performance of the proposed AMSKF is compared against previous works that solve ASP using BGSA, BPSO, and MSPSO. Using a case study of ASP, the results show that AMSKF outperformed all of these algorithms in obtaining the best solution.
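
    The angle-modulated mapping can be sketched concretely. The abstract does not give g(x) explicitly; the four-parameter generating function below is the form commonly used in angle-modulated binary optimizers and is an assumption here:

```python
import math

def g(x, a, b, c, d):
    # Assumed angle-modulation generating function: a shifts the signal,
    # b and c control the two frequencies, d adds a vertical offset.
    return math.sin(2 * math.pi * (x - a) * b * math.cos(2 * math.pi * (x - a) * c)) + d

def bits_from_signal(params, n_bits):
    # Sample g at integer positions; positive samples become 1, the rest 0.
    # Four continuous parameters thus encode an arbitrarily long bit string.
    a, b, c, d = params
    return [1 if g(i, a, b, c, d) > 0 else 0 for i in range(n_bits)]

bits = bits_from_signal((0.1, 0.7, 0.3, 0.0), 8)
print(bits)
```

    An SKF agent can then search the four-dimensional continuous space of (a, b, c, d) with its predict-measure-estimate steps while fitness is evaluated on the decoded discrete solution.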

  2. Simulator evaluation of the final approach spacing tool

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Green, Steven M.

    1990-01-01

    The design and simulator evaluation of an automation tool for assisting terminal radar approach controllers in sequencing and spacing traffic onto the final approach course are described. The automation tool, referred to as the Final Approach Spacing Tool (FAST), displays speed and heading advisories for arrivals, as well as sequencing information, on the controller's radar display. The main functional elements of FAST are a scheduler that schedules and sequences the traffic, a 4-D trajectory synthesizer that generates the advisories, and a graphical interface that displays the information to the controller. FAST was implemented on a high-performance workstation. It can be operated as a stand-alone system in the Terminal Radar Approach Control (TRACON) facility or as an element of a system integrated with automation tools in the Air Route Traffic Control Center (ARTCC). FAST was evaluated by experienced TRACON controllers in a real-time air traffic control simulation. Simulation results show that FAST significantly reduced controller workload and demonstrated a potential for an increase in landing rate.

  3. Severe Accident Sequence Analysis Program: Anticipated transient without scram simulations for Browns Ferry Nuclear Plant Unit 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dallman, R J; Gottula, R C; Holcomb, E E

    1987-05-01

    An analysis of five anticipated transients without scram (ATWS) was conducted at the Idaho National Engineering Laboratory (INEL). The five detailed deterministic simulations of postulated ATWS sequences were initiated from a main steamline isolation valve (MSIV) closure. The subject of the analysis was the Browns Ferry Nuclear Plant Unit 1, a boiling water reactor (BWR) of the BWR/4 product line with a Mark I containment. The simulations yielded insights into the possible consequences resulting from an MSIV closure ATWS. An evaluation of the effects of plant safety systems and operator actions on accident progression and mitigation is presented.

  4. An Integrated Design and Development System for Graphics Simulation.

    ERIC Educational Resources Information Center

    Richardson, J. Jeffrey

    In the training of maintenance and operations technicians, three enhancements to a basic, straightforward, fixed-sequence simulation system can be useful. The primary advantage of the resultant system is that the principal object of simulation is the task to be performed, which includes both the planning knowledge and the equipment actions…

  5. Advanced Stirling Convertor Dynamic Test Approach and Results

    NASA Technical Reports Server (NTRS)

    Meer, David W.; Hill, Dennis; Ursic, Joseph J.

    2010-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin Corporation (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA GRC undergo a vibration test sequence intended to simulate the vibration history that an ASC would experience when used in an ASRG for a space mission. This sequence includes testing at workmanship and flight acceptance levels interspersed with periods of extended operation to simulate pre-fueling and post-fueling conditions. The final step in the test sequence utilizes additional testing at flight acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors were modified based on dynamic testing of the ASRG Engineering Unit (ASRG EU) at LM. This paper outlines the overall test approach, summarizes the test results from the ASRG EU, describes the incorporation of those results into the test approach, and presents the results of applying the test approach to the ASC-1 #3 and #4 convertors. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.

  6. Topological frustration in βα-repeat proteins: sequence diversity modulates the conserved folding mechanisms of α/β/α sandwich proteins

    PubMed Central

    Hills, Ronald D.; Kathuria, Sagar V.; Wallace, Louise A.; Day, Iain J.; Brooks, Charles L.; Matthews, C. Robert

    2010-01-01

    The thermodynamic hypothesis of Anfinsen postulates that structures and stabilities of globular proteins are determined by their amino acid sequences. Chain topology, however, is known to influence the folding reaction, in that motifs with a preponderance of local interactions typically fold more rapidly than those with a larger fraction of non-local interactions. Together, the topology and sequence can modulate the energy landscape and influence the rate at which the protein folds to the native conformation. To explore the relationship of sequence and topology in the folding of βα–repeat proteins, which are dominated by local interactions, a combined experimental and simulation analysis was performed on two members of the flavodoxin-like, α/β/α sandwich fold. Spo0F and the N-terminal receiver domain of NtrC (NT-NtrC) have similar topologies but low sequence identity, enabling a test of the effects of sequence on folding. Experimental results demonstrated that both response-regulator proteins fold via parallel channels through highly structured sub-millisecond intermediates before accessing their cis prolyl peptide bond-containing native conformations. Global analysis of the experimental results preferentially places these intermediates off the productive folding pathway. Sequence-sensitive Gō-model simulations indicate that frustration in the folding of Spo0F, corresponding to the appearance of the off-pathway intermediate, reflects competition for intra-subdomain van der Waals contacts between its N- and C-terminal subdomains. The extent of transient, premature structure appears to correlate with the number of isoleucine, leucine and valine (ILV) side-chains that form a large sequence-local cluster involving the central β-sheet and helices α2, α3 and α4. The failure to detect the off-pathway species in the simulations of NT-NtrC may reflect the reduced number of ILV side-chains in its corresponding hydrophobic cluster.
The location of the hydrophobic clusters in the structure may also be related to the differing functional properties of these response regulators. Comparison with the results of previous experimental and simulation analyses on the homologous CheY argues that prematurely-folded unproductive intermediates are a common property of the βα-repeat motif. PMID:20226790

  7. A heterogeneous computing environment for simulating astrophysical fluid flows

    NASA Technical Reports Server (NTRS)

    Cazes, J.

    1994-01-01

    In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.

  8. Brownian dynamics simulations of sequence-dependent duplex denaturation in dynamically superhelical DNA

    NASA Astrophysics Data System (ADS)

    Mielke, Steven P.; Grønbech-Jensen, Niels; Krishnan, V. V.; Fink, William H.; Benham, Craig J.

    2005-09-01

    The topological state of DNA in vivo is dynamically regulated by a number of processes that involve interactions with bound proteins. In one such process, the tracking of RNA polymerase along the double helix during transcription, restriction of rotational motion of the polymerase and associated structures generates waves of overtwist downstream and undertwist upstream from the site of transcription. The resulting superhelical stress is often sufficient to drive double-stranded DNA into a denatured state at locations such as promoters and origins of replication, where sequence-specific duplex opening is a prerequisite for biological function. In this way, transcription and other events that actively supercoil the DNA provide a mechanism for dynamically coupling genetic activity with regulatory and other cellular processes. Although computer modeling has provided insight into the equilibrium dynamics of DNA supercoiling, to date no model has appeared for simulating sequence-dependent DNA strand separation under the nonequilibrium conditions imposed by the dynamic introduction of torsional stress. Here, we introduce such a model and present results from an initial set of computer simulations in which the sequences of dynamically superhelical, 147 base pair DNA circles were systematically altered in order to probe the accuracy with which the model can predict location, extent, and time of stress-induced duplex denaturation. The results agree both with well-tested statistical mechanical calculations and with available experimental information. Additionally, we find that sites susceptible to denaturation show a propensity for localizing to supercoil apices, suggesting that base sequence determines locations of strand separation not only through the energetics of interstrand interactions, but also by influencing the geometry of supercoiling.

  9. Brownian dynamics simulations of sequence-dependent duplex denaturation in dynamically superhelical DNA.

    PubMed

    Mielke, Steven P; Grønbech-Jensen, Niels; Krishnan, V V; Fink, William H; Benham, Craig J

    2005-09-22

    The topological state of DNA in vivo is dynamically regulated by a number of processes that involve interactions with bound proteins. In one such process, the tracking of RNA polymerase along the double helix during transcription, restriction of rotational motion of the polymerase and associated structures generates waves of overtwist downstream and undertwist upstream from the site of transcription. The resulting superhelical stress is often sufficient to drive double-stranded DNA into a denatured state at locations such as promoters and origins of replication, where sequence-specific duplex opening is a prerequisite for biological function. In this way, transcription and other events that actively supercoil the DNA provide a mechanism for dynamically coupling genetic activity with regulatory and other cellular processes. Although computer modeling has provided insight into the equilibrium dynamics of DNA supercoiling, to date no model has appeared for simulating sequence-dependent DNA strand separation under the nonequilibrium conditions imposed by the dynamic introduction of torsional stress. Here, we introduce such a model and present results from an initial set of computer simulations in which the sequences of dynamically superhelical, 147 base pair DNA circles were systematically altered in order to probe the accuracy with which the model can predict location, extent, and time of stress-induced duplex denaturation. The results agree both with well-tested statistical mechanical calculations and with available experimental information. Additionally, we find that sites susceptible to denaturation show a propensity for localizing to supercoil apices, suggesting that base sequence determines locations of strand separation not only through the energetics of interstrand interactions, but also by influencing the geometry of supercoiling.

  10. Advanced Stirling Convertor Dynamic Test Approach and Results

    NASA Technical Reports Server (NTRS)

    Meer, David W.; Hill, Dennis; Ursic, Joseph

    2009-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. As part of the extended operation testing of this power system, the Advanced Stirling Convertors (ASC) at NASA John H. Glenn Research Center undergo a vibration test sequence intended to simulate the vibration history of an ASC used in an ASRG for a space mission. This sequence includes testing at Workmanship and Flight Acceptance levels interspersed with periods of extended operation to simulate pre- and post-fueling conditions. The final step in the test sequence utilizes additional testing at Flight Acceptance levels to simulate launch. To better replicate the acceleration profile seen by an ASC incorporated into an ASRG, the input spectra used in testing the convertors were modified based on dynamic testing of the ASRG Engineering Unit (ASRG-EU) at Lockheed Martin. This paper presents the vibration test plan for current and future ASC units, including the modified input spectra, and the results of recent tests using these spectra. The test results include data from several accelerometers mounted on the convertors as well as the piston position and output power variables.

  11. Characterizing novel endogenous retroviruses from genetic variation inferred from short sequence reads

    PubMed Central

    Mourier, Tobias; Mollerup, Sarah; Vinner, Lasse; Hansen, Thomas Arn; Kjartansdóttir, Kristín Rós; Guldberg Frøslev, Tobias; Snogdal Boutrup, Torsten; Nielsen, Lars Peter; Willerslev, Eske; Hansen, Anders J.

    2015-01-01

    From Illumina sequencing of DNA from brain and liver tissue from the lion, Panthera leo, and tumor samples from the pike-perch, Sander lucioperca, we obtained two assembled sequence contigs with similarity to known retroviruses. Phylogenetic analyses suggest that the pike-perch retrovirus belongs to the epsilonretroviruses, and the lion retrovirus to the gammaretroviruses. To determine if these novel retroviral sequences originate from an endogenous retrovirus or from a recently integrated exogenous retrovirus, we assessed the genetic diversity of the parental sequences from which the short Illumina reads are derived. First, we showed by simulations that we can robustly infer the level of genetic diversity from short sequence reads. Second, we find that the measures of nucleotide diversity inferred from our retroviral sequences significantly exceed the level observed from Human Immunodeficiency Virus infections, prompting us to conclude that the novel retroviruses are both of endogenous origin. Through further simulations, we rule out the possibility that the observed elevated levels of nucleotide diversity are the result of co-infection with two closely related exogenous retroviruses. PMID:26493184
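
    The central quantity here, nucleotide diversity, is the average proportion of sites that differ between two sequences, taken over all pairs. A minimal sketch for equal-length aligned sequences (not the authors' pipeline):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    # Average pairwise proportion of differing sites (pi) over an alignment.
    pairs = list(combinations(seqs, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for s1, s2 in pairs:
        diffs = sum(1 for x, y in zip(s1, s2) if x != y)
        total += diffs / len(s1)
    return total / len(pairs)

# Three toy aligned reads: pairwise differences are 1, 1, and 2 sites of 8.
pi = nucleotide_diversity(["ACGTACGT", "ACGTACGA", "ACGAACGT"])
print(round(pi, 4))  # -> 0.1667
```

    An endogenous-versus-exogenous call then reduces to comparing this statistic against the diversity expected from a recent infection.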

  12. High performance MRI simulations of motion on multi-GPU systems

    PubMed Central

    2014-01-01

    Background MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high-performance multi-GPU environment. Methods Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Finally, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were achieved through the introduction of software crushers without the need to further increase the computational load and GPU resources. Finally, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems.
    Conclusions MRISIMUL is the first MR physics simulator to have implemented motion with a large 3D computational load on a single-computer multi-GPU configuration. The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow, may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms that examine motion-related MR applications. PMID:24996972
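
    The isochromat computation described, in which each magnetization vector is stepped through every timestep of the pulse sequence, can be illustrated with a single-isochromat sketch. The RF pulse is idealized as an instantaneous rotation about x, relaxation uses the standard Bloch decay terms, and the T1/T2 values are merely illustrative, not MRISIMUL's:

```python
import math

def excite(m, flip_rad):
    # Ideal RF pulse: rotate the magnetization (mx, my, mz) about the x-axis.
    mx, my, mz = m
    return (mx,
            my * math.cos(flip_rad) + mz * math.sin(flip_rad),
            -my * math.sin(flip_rad) + mz * math.cos(flip_rad))

def relax(m, dt, t1, t2, m0=1.0):
    # Bloch relaxation over dt: transverse decay (T2), longitudinal recovery (T1).
    mx, my, mz = m
    e1, e2 = math.exp(-dt / t1), math.exp(-dt / t2)
    return (mx * e2, my * e2, m0 + (mz - m0) * e1)

# One gradient-echo-like step: 30-degree pulse, then 10 ms of relaxation.
m = (0.0, 0.0, 1.0)                       # equilibrium magnetization
m = excite(m, math.radians(30))
m = relax(m, dt=0.01, t1=0.85, t2=0.06)   # illustrative T1/T2 in seconds
signal = math.hypot(m[0], m[1])           # transverse magnitude = detected signal
print(round(signal, 4))  # -> 0.4232
```

    A GPU simulator applies this same per-isochromat update in parallel across millions of isochromats (with motion shifting each one's position and off-resonance at every timestep) and sums their transverse components to form the signal.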

  13. A strategy for detecting the conservation of folding-nucleus residues in protein superfamilies.

    PubMed

    Michnick, S W; Shakhnovich, E

    1998-01-01

    Nucleation-growth theory predicts that fast-folding peptide sequences fold to their native structure via structures in a transition-state ensemble that share a small number of native contacts (the folding nucleus). Experimental and theoretical studies of proteins suggest that residues participating in folding nuclei are conserved among homologs. We attempted to determine if this is true in proteins with highly diverged sequences but identical folds (superfamilies). We describe a strategy based on comparisons of residue conservation in natural superfamily sequences with simulated sequences (generated with a Monte-Carlo sequence design strategy) for the same proteins. The basic assumptions of the strategy were that natural sequences conserve residues needed for folding and stability as well as function; that the simulated sequences contain no functional conservation; and that nucleus residues make native contacts with each other. Based on these assumptions, we identified seven potential nucleus residues in ubiquitin superfamily members. Non-nucleus conserved residues were also identified; these are proposed to be involved in stabilizing native interactions. We found that all superfamily members conserved the same potential nucleus residue positions, except those for which the structural topology is significantly different. Our results suggest that the conservation of the nucleus of a specific fold can be predicted by comparing designed simulated sequences with natural highly diverged sequences that fold to the same structure. We suggest that such a strategy could be used to help plan protein folding and design experiments, to identify new superfamily members, and to subdivide superfamilies further into classes having a similar folding mechanism.
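
    The per-position conservation being compared can be quantified with column-wise Shannon entropy, where zero entropy means complete conservation. A toy sketch, not the authors' Monte-Carlo design procedure:

```python
import math
from collections import Counter

def column_entropy(alignment, col):
    # Shannon entropy (bits) of the residues observed at one alignment column.
    counts = Counter(seq[col] for seq in alignment)
    n = len(alignment)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy alignment: columns 0 and 3 are fully conserved, 1 and 2 are not.
aln = ["MKVL", "MKIL", "MRVL", "MKVL"]
scores = [round(column_entropy(aln, i), 3) for i in range(4)]
print(scores)  # -> [0.0, 0.811, 0.811, 0.0]
```

    Positions that stay low-entropy in both the natural and the designed sets point to folding or stability roles such as the nucleus; positions low-entropy only in the natural set point to functional conservation.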

  14. XS: a FASTQ read simulator.

    PubMed

    Pratas, Diogo; Pinho, Armando J; Rodrigues, João M O S

    2014-01-16

    Next-generation sequencing (NGS) is bringing, besides huge amounts of data, an avalanche of new specialized tools (for analysis, compression, and alignment, among others) and large public and private network infrastructures. This creates a direct need for specific simulation tools for testing and benchmarking, such as a flexible and portable FASTQ read simulator that needs no reference sequence yet produces approximately the same characteristics as real data. We present XS, a FASTQ read simulation tool that is flexible, portable (it does not need a reference sequence), and tunable in terms of sequence complexity. It has several running modes, depending on the time and memory available, and is aimed at testing computing infrastructures, namely cloud computing for large-scale projects, and at testing FASTQ compression algorithms. Moreover, XS offers the possibility of simulating the three main FASTQ components individually (headers, DNA sequences, and quality scores). XS provides an efficient and convenient method for fast simulation of FASTQ files, such as those from Ion Torrent (currently uncovered by other simulators), Roche-454, Illumina, and ABI-SOLiD sequencing machines. This tool is publicly available at http://bioinformatics.ua.pt/software/xs/.
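
    A reference-free FASTQ simulator reduces to emitting synthetic four-line records with the three components named above: header, DNA sequence, and quality string. An illustrative minimal generator, not XS itself (XS additionally tunes sequence complexity and platform-specific characteristics):

```python
import random

def simulate_fastq(n_reads, read_len, seed=42):
    # Emit FASTQ records: @header, random DNA, '+' separator, Phred+33 qualities.
    rng = random.Random(seed)
    records = []
    for i in range(n_reads):
        seq = "".join(rng.choice("ACGT") for _ in range(read_len))
        qual = "".join(chr(33 + rng.randint(20, 40)) for _ in range(read_len))
        records.append(f"@SIM_{i}\n{seq}\n+\n{qual}")
    return "\n".join(records)

print(simulate_fastq(2, 50).splitlines()[0])  # -> @SIM_0
```

    Because the output needs no reference genome, a generator like this is enough to stress-test compressors and I/O pipelines at arbitrary scale.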

  15. MySSP: Non-stationary evolutionary sequence simulation, including indels

    PubMed Central

    Rosenberg, Michael S.

    2007-01-01

    MySSP is a new program for the simulation of DNA sequence evolution across a phylogenetic tree. Although many programs are available for sequence simulation, MySSP is unique in its inclusion of indels, its flexibility in allowing for non-stationary patterns, and its output of ancestral sequences. Some of these features can individually be found in existing programs, but not all have previously been available in a single package. PMID:19325855
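
    Evolving a sequence along one branch with both substitutions and indels can be sketched with a toy per-site model; the rates and the single-base indel scheme below are illustrative assumptions, not MySSP's actual evolutionary model:

```python
import random

def evolve(seq, branch_len, sub_rate=1.0, indel_rate=0.05, seed=None):
    # Toy branch evolution: each site may be deleted, substituted, or
    # followed by an inserted base, with probabilities scaled by branch length.
    rng = random.Random(seed)
    out = []
    for base in seq:
        r = rng.random()
        if r < indel_rate * branch_len:               # deletion: drop this base
            continue
        if r < (indel_rate + sub_rate) * branch_len:  # substitution
            base = rng.choice([b for b in "ACGT" if b != base])
        out.append(base)
        if rng.random() < indel_rate * branch_len:    # insertion after this base
            out.append(rng.choice("ACGT"))
    return "".join(out)

parent = "ACGTACGTACGT"
child = evolve(parent, branch_len=0.1, seed=1)
print(child)
```

    Applying such a step recursively from the root along every branch of a tree yields both the tip sequences and the ancestral sequences at internal nodes, the kind of output MySSP provides; non-stationarity corresponds to letting the rates vary between branches.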

  16. Simulation-Based Evaluation of Learning Sequences for Instructional Technologies

    ERIC Educational Resources Information Center

    McEneaney, John E.

    2016-01-01

    Instructional technologies critically depend on systematic design, and learning hierarchies are a commonly advocated tool for designing instructional sequences. But hierarchies routinely allow numerous sequences and choosing an optimal sequence remains an unsolved problem. This study explores a simulation-based approach to modeling learning…

  17. Numerical modeling of the traction process in the treatment for Pierre-Robin Sequence.

    PubMed

    Słowiński, Jakub J; Czarnecka, Aleksandra

    2016-10-01

    The goal of this numerical study was to identify the results of modulated growth simulation of the mandibular bone during traction in Pierre-Robin Sequence (PRS) treatment. Numerical simulation was conducted in the Ansys 16.2 environment. Two FEM (finite element method) models of a newborn's mandible (a spatial and a flat model) were developed. The procedure simulated a 20-week traction period. The adopted growth measure was mandibular length increase, defined as the distance between the Co and Pog anatomic points used in cephalometric analysis. The simulation calculations conducted on the developed models showed that modulation had a significant influence on the pace of bone growth. In each of the analyzed cases, growth modulation resulted in an increase in pace. The largest value of increase was 6.91 mm. The modulated growth with the most beneficial load variant increased the basic value of the growth by as much as 24.6%, and growth with the least beneficial variant increased it by 7.4%. Traction is a simple, minimally invasive and inexpensive procedure. The proposed algorithm may enable the development of a helpful forecasting tool, which could be of real use to doctors working on Pierre-Robin Sequence and other mandibular deformations in children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Recombination enhances HIV-1 envelope diversity by facilitating the survival of latent genomic fragments in the plasma virus population

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Immonen, Taina T.; Conway, Jessica M.; Romero-Severson, Ethan O.

    HIV-1 is subject to immune pressure exerted by the host, giving variants that escape the immune response an advantage. Virus released from activated latent cells competes against variants that have continually evolved and adapted to host immune pressure. Nevertheless, there is increasing evidence that virus displaying a signal of latency survives in patient plasma despite having reduced fitness due to long-term immune memory. We investigated the survival of virus with latent envelope genomic fragments by simulating within-host HIV-1 sequence evolution and the cycling of viral lineages in and out of the latent reservoir. Our model incorporates a detailed mutation process including nucleotide substitution, recombination, latent reservoir dynamics, diversifying selection pressure driven by the immune response, and purifying selection pressure exerted by deleterious mutations. We evaluated the ability of our model to capture sequence evolution in vivo by comparing our simulated sequences to HIV-1 envelope sequence data from 16 HIV-infected untreated patients. Empirical sequence divergence and diversity measures were qualitatively and quantitatively similar to those of our simulated HIV-1 populations, suggesting that our model invokes realistic trends of HIV-1 genetic evolution. Moreover, reconstructed phylogenies of simulated and patient HIV-1 populations showed similar topological structures. Our simulation results suggest that recombination is a key mechanism facilitating the persistence of virus with latent envelope genomic fragments in the productively infected cell population. Recombination increased the survival probability of latent virus forms approximately 13-fold. Prevalence of virus with latent fragments in productively infected cells was observed in only 2% of simulations when we ignored recombination, while the proportion increased to 27% of simulations when we allowed recombination. We also found that the selection pressures exerted by different fitness landscapes influenced the shape of phylogenies, diversity trends, and survival of virus with latent genomic fragments. Furthermore, our model predicts that the persistence of latent genomic fragments from multiple different ancestral origins increases sequence diversity in plasma for reasonable fitness landscapes.
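
    The recombination mechanism credited with latent-fragment survival amounts to template switching between two co-packaged genomes, which lets a fragment from a latent lineage ride along in an otherwise fit backbone. The simplest form, a single-breakpoint crossover, can be sketched as follows (illustrative only, not the authors' simulation code):

```python
import random

def recombine(seq_a, seq_b, seed=None):
    # Single template switch: copy seq_a up to a random breakpoint,
    # then continue from seq_b (sequences assumed to be equal length).
    rng = random.Random(seed)
    bp = rng.randrange(1, len(seq_a))
    return seq_a[:bp] + seq_b[bp:]

# A fit backbone ("F") acquires the tail of a latent-derived genome ("L").
child = recombine("F" * 12, "L" * 12, seed=7)
print(child)
```

    Repeating such crossovers each replication cycle is what allows latent-derived segments to persist even when the full latent genome is less fit than circulating variants.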

  19. Recombination enhances HIV-1 envelope diversity by facilitating the survival of latent genomic fragments in the plasma virus population

    DOE PAGES

    Immonen, Taina T.; Conway, Jessica M.; Romero-Severson, Ethan O.; ...

    2015-12-22

    HIV-1 is subject to immune pressure exerted by the host, giving variants that escape the immune response an advantage. Virus released from activated latent cells competes against variants that have continually evolved and adapted to host immune pressure. Nevertheless, there is increasing evidence that virus displaying a signal of latency survives in patient plasma despite having reduced fitness due to long-term immune memory. We investigated the survival of virus with latent envelope genomic fragments by simulating within-host HIV-1 sequence evolution and the cycling of viral lineages in and out of the latent reservoir. Our model incorporates a detailed mutation processmore » including nucleotide substitution, recombination, latent reservoir dynamics, diversifying selection pressure driven by the immune response, and purifying selection pressure asserted by deleterious mutations. We evaluated the ability of our model to capture sequence evolution in vivo by comparing our simulated sequences to HIV-1 envelope sequence data from 16 HIV-infected untreated patients. Empirical sequence divergence and diversity measures were qualitatively and quantitatively similar to those of our simulated HIV-1 populations, suggesting that our model invokes realistic trends of HIV-1 genetic evolution. Moreover, reconstructed phylogenies of simulated and patient HIV-1 populations showed similar topological structures. Our simulation results suggest that recombination is a key mechanism facilitating the persistence of virus with latent envelope genomic fragments in the productively infected cell population. Recombination increased the survival probability of latent virus forms approximately 13-fold. Prevalence of virus with latent fragments in productively infected cells was observed in only 2% of simulations when we ignored recombination, while the proportion increased to 27% of simulations when we allowed recombination. 
We also found that the selection pressures exerted by different fitness landscapes influenced the shape of phylogenies, diversity trends, and survival of virus with latent genomic fragments. Furthermore, our model predicts that the persistence of latent genomic fragments from multiple different ancestral origins increases sequence diversity in plasma for reasonable fitness landscapes.
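The recombination mechanism highlighted in this record can be illustrated with a minimal sketch. This is not the authors' model (which couples recombination to substitution, selection, and reservoir dynamics); it is a hypothetical single-crossover operator of the kind such within-host simulations assume, applied to stand-in parental sequences.

```python
import random

def recombine(parent_a, parent_b, rng=None):
    """Single-crossover recombination between two aligned genomes.

    The child carries parent_a up to a random breakpoint and
    parent_b after it, so latent fragments can persist inside an
    otherwise contemporary genome.
    """
    assert len(parent_a) == len(parent_b)
    rng = rng or random.Random()
    cut = rng.randrange(1, len(parent_a))  # breakpoint strictly inside
    return parent_a[:cut] + parent_b[cut:]

latent = "A" * 30   # stand-in for a reactivated latent genome
plasma = "G" * 30   # stand-in for a contemporary plasma variant
child = recombine(latent, plasma, rng=random.Random(1))
```

Because the breakpoint is strictly interior, every child carries material from both parents, which is the effect that raises the survival probability of latent fragments in such models.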

  20. In the Absence of Writhe, DNA Relieves Torsional Stress with Localized, Sequence-Dependent Structural Failure to Preserve B-form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randall, Graham L.; Zechiedrich, E. L.; Pettitt, Bernard M.

    2009-09-01

To understand how underwinding and overwinding the DNA helix affects its structure, we simulated 19 independent DNA systems with fixed degrees of twist using molecular dynamics in a system that does not allow writhe. Underwinding DNA induced spontaneous, sequence-dependent base flipping and local denaturation, while overwinding DNA induced the formation of Pauling-like DNA (P-DNA). The winding resulted in a bimodal state simultaneously including local structural failure and B-form DNA for both underwinding and extreme overwinding. Our simulations suggest that base flipping and local denaturation may provide a landscape influencing protein recognition of DNA sequence to affect, for example, replication, transcription and recombination. Additionally, our findings help explain results from single-molecule experiments and demonstrate that elastic rod models are strictly valid on average only for unstressed or overwound DNA up to P-DNA formation. Finally, our data support a model in which base flipping can result from torsional stress.

  1. Elongation Factor-Tu (EF-Tu) proteins structural stability and bioinformatics in ancestral gene reconstruction

    NASA Astrophysics Data System (ADS)

    Dehipawala, Sunil; Nguyen, A.; Tremberger, G.; Cheung, E.; Schneider, P.; Lieberman, D.; Holden, T.; Cheung, T.

    2013-09-01

A paleo-experimental evolution report on elongation factor EF-Tu structural stability has provided an opportunity to rewind the tape of life using the ancestral protein sequence reconstruction modeling approach, an approach consistent with the book-of-life dogma in current biology and an important component of work in the astrobiology community. The fractal dimension, computed via the Higuchi method, and the Shannon entropy of a DNA sequence can be used together in a diagram that serves as a simple classification summary. Results from biomedical gene research provide examples of the diagram methodology. Comparisons of biomedical genes such as EEF2 (elongation factor 2 in human, mouse, etc.), WDR85 in epigenetics, HAR1 in human specificity, DLG1 in cognitive skill, and HLA-C in mosquito-bite immunology with EF-Tu DNA sequences account systematically for the reported circular dichroism thermo-stability data; from an adaptation viewpoint, the results also suggest a relatively less volatile geologic period between 2 and 3 Gyr. Comparison to Thermotoga maritima MSB8 and Psychrobacter shows that the Thermus thermophilus HB8 EF-Tu calibration sequence could be an outlier, consistent with free-energy calculations by NUPACK. The diagram methodology also allows computer simulation studies: HAR1 shows about a 0.5% probability from chimp to human in terms of diagram location, and SNP simulation results such as amoebic meningoencephalitis NAF1 suggest correlation. Extensions to the translation and transcription elongation factor sequences in Megavirus chiliensis, Megavirus lba and Pandoravirus show that the studied Pandoravirus sequence could be an outlier, with the highest fractal dimension and lowest entropy, comparable to chicken as a deviant in the DNMT3A DNA methylation gene sequences from zebrafish to human, and to the less-than-one-percent probability in computer simulation using the HAR1 0.5% probability as reference.
The diagram methodology would be useful in ancestral gene reconstruction studies in astrobiology and also be applicable to the study of point mutation in conformational thermostabilization research with Synchrotron based X-ray data for drug applications such as Parkinson's disease.
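The Shannon-entropy axis of the diagram methodology described above is straightforward to reproduce. The sketch below, assuming simple k-mer frequencies (the paper's exact windowing and the Higuchi fractal-dimension axis are not reproduced here), computes the entropy in bits of a DNA string.

```python
import math
from collections import Counter

def shannon_entropy(seq, k=1):
    """Shannon entropy (bits) of k-mer frequencies in a DNA sequence."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    counts = Counter(kmers)
    total = len(kmers)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A maximally mixed sequence approaches 2 bits per base;
# a homopolymer carries no uncertainty at all.
e_mixed = shannon_entropy("ACGT" * 50)  # 2.0 bits
e_homo = shannon_entropy("A" * 200)     # 0.0 bits
```

Plotting this entropy against a fractal-dimension estimate for each sequence gives a two-axis diagram of the kind the record uses to flag outliers such as the Pandoravirus sequence.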

  2. Magnetic flux density measurement with balanced steady state free precession pulse sequence for MREIT: a simulation study.

    PubMed

    Minhas, Atul S; Woo, Eung Je; Lee, Soo Yeol

    2009-01-01

Magnetic Resonance Electrical Impedance Tomography (MREIT) utilizes the magnetic flux density B(z), generated by current injection, to find the conductivity distribution inside an object. B(z) can be measured from MR phase images using a spin echo pulse sequence. The SNR of B(z) and the sensitivity of the phase produced by B(z) in the MR phase image are critical in deciding the resolution of MREIT conductivity images. Conventional spin-echo-based data acquisition has poor phase sensitivity to current injection, and a longer scan time is needed to acquire data with higher SNR. We propose a balanced steady state free precession (b-SSFP) based pulse sequence which is highly sensitive to small off-resonance phase changes. A procedure to reconstruct B(z) from the MR signal obtained with the b-SSFP sequence is described. Phases of the b-SSFP signals for two conductivity phantoms, of TX-151 and gelatin, are simulated from mathematical models of the b-SSFP signal. The phase changes obtained from the b-SSFP pulse sequence are highly sensitive to current injection and hence yield a higher measured magnetic flux density. However, the b-SSFP signal depends on magnetic field inhomogeneity, and it deteriorates strongly for small offsets from the resonance frequency. The simulation results show that the b-SSFP sequence can be utilized for conductivity imaging of a local region where magnetic field inhomogeneity is small. Proper shimming of the magnet is recommended before using the b-SSFP sequence.

  3. Noncontrast peripheral MRA with spiral echo train imaging.

    PubMed

    Fielden, Samuel W; Mugler, John P; Hagspiel, Klaus D; Norton, Patrick T; Kramer, Christopher M; Meyer, Craig H

    2015-03-01

To develop a spin echo train sequence with spiral readout gradients with improved artery-vein contrast for noncontrast angiography. Venous T2 becomes shorter as the echo spacing is increased in echo train sequences, improving contrast. Spiral acquisitions, due to their data collection efficiency, facilitate long echo spacings without increasing scan times. Bloch equation simulations were performed to determine optimal sequence parameters, and the sequence was applied in five volunteers. In two volunteers, the sequence was performed with a range of echo times and echo spacings to compare with the theoretical contrast behavior. A Cartesian version of the sequence was used to compare contrast appearance with the spiral sequence. Additionally, spiral parallel imaging was optionally used to improve image resolution. In vivo, artery-vein contrast properties followed the general shape predicted by simulations, and good results were obtained in all stations. Compared with a Cartesian implementation, the spiral sequence had superior artery-vein contrast, better spatial resolution (1.2 mm² versus 1.5 mm²), and was acquired in less time (1.4 min versus 7.5 min). The spiral spin echo train sequence can be used for flow-independent angiography to generate three-dimensional angiograms of the periphery quickly and without the use of contrast agents. © 2014 Wiley Periodicals, Inc.

  4. Three speech sounds, one motor action: evidence for speech-motor disparity from English flap production.

    PubMed

    Derrick, Donald; Stavness, Ian; Gick, Bryan

    2015-03-01

    The assumption that units of speech production bear a one-to-one relationship to speech motor actions pervades otherwise widely varying theories of speech motor behavior. This speech production and simulation study demonstrates that commonly occurring flap sequences may violate this assumption. In the word "Saturday," a sequence of three sounds may be produced using a single, cyclic motor action. Under this view, the initial upward tongue tip motion, starting with the first vowel and moving to contact the hard palate on the way to a retroflex position, is under active muscular control, while the downward movement of the tongue tip, including the second contact with the hard palate, results from gravity and elasticity during tongue muscle relaxation. This sequence is reproduced using a three-dimensional computer simulation of human vocal tract biomechanics and differs greatly from other observed sequences for the same word, which employ multiple targeted speech motor actions. This outcome suggests that a goal of a speaker is to produce an entire sequence in a biomechanically efficient way at the expense of maintaining parity within the individual parts of the sequence.

  5. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
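As a rough illustration of the idea in this record, a cross-periodogram pairs the DFTs of two sequences; the thesis's joint spectral density additionally requires constraints (a stationary random field, or a linked model) to be uniquely defined, which this sketch does not impose. The code below, assuming a plain DFT over real-valued inputs, shows a shared frequency appearing as a joint peak.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def cross_spectral_density(x, y):
    """Cross-periodogram S_xy[k] = X[k] * conj(Y[k]) / N.

    |S_xy[k]| weighs how strongly frequency k is jointly present in the
    pair -- a crude, unconstrained stand-in for the joint spectral density.
    """
    N = len(x)
    X, Y = dft(x), dft(y)
    return [X[k] * Y[k].conjugate() / N for k in range(N)]

# Two sequences sharing a common frequency component (5 cycles in N samples).
N = 64
x = [math.cos(2 * math.pi * 5 * n / N) for n in range(N)]
y = [math.cos(2 * math.pi * 5 * n / N + 0.3) for n in range(N)]
S = cross_spectral_density(x, y)
```

The joint peak sits at bin 5 (and its mirror at N-5), matching the shared component; frequencies present in only one sequence would contribute far less.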

  6. Analysis of spring-in in U-shaped composite laminates: Numerical and experimental results

    NASA Astrophysics Data System (ADS)

    Bellini, Costanzo; Sorrentino, Luca; Polini, Wilma; Parodo, Gianluca

    2018-05-01

The phenomena that occur during the cure process of a composite laminate are responsible for the rise of residual stresses and, consequently, for the deformation observed at the end of the manufacturing process. The most analyzed deformation is the spring-in, which represents the deviation of the flange-to-flange angle from its theoretical value. In this work, the influence of parameters such as the laminate thickness, the stacking sequence and the mold radius on the spring-in angle of a U-shaped laminate was studied by exploring a full factorial plan through numerical simulations. First, a numerical model suited to cure simulation was introduced and its ability to simulate the deformation behavior was demonstrated. As a result, only the stacking sequence influenced the spring-in value, while the effect of the tool radius and laminate thickness was minimal.

  7. Pyvolve: A Flexible Python Module for Simulating Sequences along Phylogenies.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2015-01-01

    We introduce Pyvolve, a flexible Python module for simulating genetic data along a phylogeny using continuous-time Markov models of sequence evolution. Easily incorporated into Python bioinformatics pipelines, Pyvolve can simulate sequences according to most standard models of nucleotide, amino-acid, and codon sequence evolution. All model parameters are fully customizable. Users can additionally specify custom evolutionary models, with custom rate matrices and/or states to evolve. This flexibility makes Pyvolve a convenient framework not only for simulating sequences under a wide variety of conditions, but also for developing and testing new evolutionary models. Pyvolve is an open-source project under a FreeBSD license, and it is available for download, along with a detailed user-manual and example scripts, from http://github.com/sjspielman/pyvolve.
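The continuous-time Markov machinery such simulators implement can be sketched compactly. The code below is not Pyvolve's API; it is a minimal Jukes-Cantor illustration in which every site mutates independently, with probability p = 3/4 · (1 − e^(−4μt/3)) of differing from its ancestor after a branch of length t.

```python
import math
import random

def evolve_branch(seq, branch_length, mu=1.0, alphabet="ACGT", rng=None):
    """Evolve a sequence along one branch under a Jukes-Cantor CTMC.

    p_change is the closed-form probability that a site differs from
    its ancestor after time t under the Jukes-Cantor model.
    """
    rng = rng or random.Random()
    p_change = 0.75 * (1.0 - math.exp(-4.0 * mu * branch_length / 3.0))
    out = []
    for base in seq:
        if rng.random() < p_change:
            # substitute with one of the three other bases, uniformly
            out.append(rng.choice([b for b in alphabet if b != base]))
        else:
            out.append(base)
    return "".join(out)

root = "ACGT" * 25
tip = evolve_branch(root, branch_length=0.1, rng=random.Random(42))
```

Applying `evolve_branch` recursively from the root of a phylogeny down to every tip reproduces, in miniature, the simulation scheme the module implements; richer models swap in different rate matrices per partition.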

  8. Task sequence planning in a robot workcell using AND/OR nets

    NASA Technical Reports Server (NTRS)

    Cao, Tiehua; Sanderson, Arthur C.

    1991-01-01

    An approach to task sequence planning for a generalized robotic manufacturing or material handling workcell is described. Given the descriptions of the objects in this system and all feasible geometric relationships among these objects, an AND/OR net which describes the relationships of all feasible geometric states and associated feasibility criteria for net transitions is generated. This AND/OR net is mapped into a Petri net which incorporates all feasible sequences of operations. The resulting Petri net is shown to be bounded and have guaranteed properties of liveness, safeness, and reversibility. Sequences are found from the reachability tree of the Petri net. Feasibility criteria for net transitions may be used to generate an extended Petri net representation of lower level command sequences. The resulting Petri net representation may be used for on-line scheduling and control of the system of feasible sequences. A simulation example of the sequences is described.
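The enabled/fire semantics underlying such a Petri net representation can be sketched in a few lines. The net below is a hypothetical two-operation pick-then-place workcell, not the paper's example; feasible operation sequences are enumerated from a reachability tree of bounded depth.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds a token."""
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    """Fire: consume a token from each input place, add one to each output."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

def reachable_sequences(marking, transitions, depth):
    """Enumerate feasible firing sequences up to a given depth."""
    if depth == 0:
        return [[]]
    seqs = [[]]
    for name, t in transitions.items():
        if enabled(marking, t):
            for tail in reachable_sequences(fire(marking, t), transitions, depth - 1):
                seqs.append([name] + tail)
    return seqs

# Hypothetical workcell: a part can be picked only when the gripper is
# free, and placed only once held.
net = {"pick": (["part_ready", "gripper_free"], ["part_held"]),
       "place": (["part_held"], ["part_placed", "gripper_free"])}
m0 = {"part_ready": 1, "gripper_free": 1}
seqs = reachable_sequences(m0, net, depth=2)
```

Infeasible orderings (placing before picking) simply never appear in the tree, which is the property the paper exploits for on-line scheduling over the net of feasible sequences.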

  9. Coarse-grained sequences for protein folding and design.

    PubMed

    Brown, Scott; Fawzi, Nicolas J; Head-Gordon, Teresa

    2003-09-16

    We present the results of sequence design on our off-lattice minimalist model in which no specification of native-state tertiary contacts is needed. We start with a sequence that adopts a target topology and build on it through sequence mutation to produce new sequences that comprise distinct members within a target fold class. In this work, we use the alpha/beta ubiquitin fold class and design two new sequences that, when characterized through folding simulations, reproduce the differences in folding mechanism seen experimentally for proteins L and G. The primary implication of this work is that patterning of hydrophobic and hydrophilic residues is the physical origin for the success of relative contact-order descriptions of folding, and that these physics-based potentials provide a predictive connection between free energy landscapes and amino acid sequence (the original protein folding problem). We present results of the sequence mapping from a 20- to the three-letter code for determining a sequence that folds into the WW domain topology to illustrate future extensions to protein design.

  10. Coarse-grained sequences for protein folding and design

    PubMed Central

    Brown, Scott; Fawzi, Nicolas J.; Head-Gordon, Teresa

    2003-01-01

    We present the results of sequence design on our off-lattice minimalist model in which no specification of native-state tertiary contacts is needed. We start with a sequence that adopts a target topology and build on it through sequence mutation to produce new sequences that comprise distinct members within a target fold class. In this work, we use the α/β ubiquitin fold class and design two new sequences that, when characterized through folding simulations, reproduce the differences in folding mechanism seen experimentally for proteins L and G. The primary implication of this work is that patterning of hydrophobic and hydrophilic residues is the physical origin for the success of relative contact-order descriptions of folding, and that these physics-based potentials provide a predictive connection between free energy landscapes and amino acid sequence (the original protein folding problem). We present results of the sequence mapping from a 20- to the three-letter code for determining a sequence that folds into the WW domain topology to illustrate future extensions to protein design. PMID:12963815

  11. Contribution of energy values to the analysis of global searching molecular dynamics simulations of transmembrane helical bundles.

    PubMed Central

    Torres, Jaume; Briggs, John A G; Arkin, Isaiah T

    2002-01-01

Molecular interactions between transmembrane alpha-helices can be explored using global searching molecular dynamics simulations (GSMDS), a method that produces a group of probable low energy structures. We have shown previously that the correct model in various homooligomers is always located at the bottom of one of various possible energy basins. Unfortunately, the correct model is not necessarily the one with the lowest energy according to the computational protocol, which has led to this parameter being overlooked in favor of experimental data. In an attempt to use energetic considerations in the aforementioned analysis, we used global searching molecular dynamics simulations on three homooligomers of different sizes, the structures of which are known. As expected, our results show that even when the conformational space searched includes the correct structure, taking simulations with both left and right handedness together, the correct model does not necessarily have the lowest energy. However, for the models derived from the simulation that uses the correct handedness, the lowest energy model is always at, or very close to, the correct orientation. We hypothesize that this should also be true when simulations are performed using homologous sequences, and consequently lowest energy models with the right handedness should produce a cluster around a certain orientation. In contrast, using the wrong handedness the lowest energy structures for each sequence should appear at many different orientations. The rationale behind this is that, although more than one energy basin may exist, basins that do not contain the correct model will shift or disappear because they will be destabilized by at least one conservative (i.e. silent) mutation, whereas the basin containing the correct model will remain. 
This not only allows one to point to the possible handedness of the bundle, but can be used to overcome ambiguities arising from the use of homologous sequences in the analysis of global searching molecular dynamics simulations. In addition, because clustering of lowest energy models arising from homologous sequences only happens when the estimation of the helix tilt is correct, it may provide a validation for the helix tilt estimate. PMID:12023229

  12. Loop propensity of the sequence YKGQP from staphylococcal nuclease: implications for the folding of nuclease.

    PubMed

    Patel, Sunita; Sasidhar, Yellamraju U

    2007-10-01

    Recently we performed molecular dynamics (MD) simulations on the folding of the hairpin peptide DTVKLMYKGQPMTFR from staphylococcal nuclease in explicit water. We found that the peptide folds into a hairpin conformation with native and nonnative hydrogen-bonding patterns. In all the folding events observed in the folding of the hairpin peptide, loop formation involving the region YKGQP was an important event. In order to trace the origins of the loop propensity of the sequence YKGQP, we performed MD simulations on the sequence starting from extended, polyproline II and native type I' turn conformations for a total simulation length of 300 ns, using the GROMOS96 force field under constant volume and temperature (NVT) conditions. The free-energy landscape of the peptide YKGQP shows minima corresponding to loop conformation with Tyr and Pro side-chain association, turn and extended conformational forms, with modest free-energy barriers separating the minima. To elucidate the role of Gly in facilitating loop formation, we also performed MD simulations of the mutated peptide YKAQP (Gly --> Ala mutation) under similar conditions starting from polyproline II conformation for 100 ns. Two minima corresponding to bend/turn and extended conformations were observed in the free-energy landscape for the peptide YKAQP. The free-energy barrier between the minima in the free-energy landscape of the peptide YKAQP was also modest. Loop conformation is largely sampled by the YKGQP peptide, while extended conformation is largely sampled by the YKAQP peptide. We also explain why the YKGQP sequence samples type II turn conformation in these simulations, whereas the sequence as part of the hairpin peptide DTVKLMYKGQPMTFR samples type I' turn conformation both in the X-ray crystal structure and in our earlier simulations on the folding of the hairpin peptide. We discuss the implications of our results to the folding of the staphylococcal nuclease. 
Copyright (c) 2007 European Peptide Society and John Wiley & Sons, Ltd.

  13. Finite element modeling of ROPS in static testing and rear overturns.

    PubMed

    Harris, J R; Mucino, V H; Etherton, J R; Snyder, K A; Means, K H

    2000-08-01

    Even with the technological advances of the last several decades, agricultural production remains one of the most hazardous occupations in the United States. Death due to tractor rollover is a prime contributor to this hazard. Standards for rollover protective structures (ROPS) performance and certification have been developed by groups such as the Society of Automotive Engineers (SAE) and the American Society of Agricultural Engineers (ASAE) to combat these problems. The current ROPS certification standard, SAE J2194, requires either a dynamic or static testing sequence or both. Although some ROPS manufacturers perform both the dynamic and static phases of SAE J2194 testing, it is possible for a ROPS to be certified for field operation using static testing alone. This research compared ROPS deformation response from a simulated SAE J2194 static loading sequence to ROPS deformation response as a result of a simulated rearward tractor rollover. Finite element analysis techniques for plastic deformation were used to simulate both the static and dynamic rear rollover scenarios. Stress results from the rear rollover model were compared to results from simulated static testing per SAE J2194. Maximum stress values from simulated rear rollovers exceeded maximum stress values recorded during simulated static testing for half of the elements comprising the uprights. In the worst case, the static model underpredicts dynamic model results by approximately 7%. In the best case, the static model overpredicts dynamic model results by approximately 32%. These results suggest the need for additional experimental work to characterize ROPS stress levels during staged overturns and during testing according to the SAE standard.

  14. Real-time film recording from stroke-written CRT's

    NASA Technical Reports Server (NTRS)

    Hunt, R.; Grunwald, A. J.

    1980-01-01

Real-time simulation studies often require motion-picture recording of events directly from stroke-written cathode-ray tubes (CRT's). The difficulty presented is the prevention of "flicker," which results from a lack of synchronization between the display sequence on the CRT and the shutter motion of the camera. A programmable method has been devised for phasing the display sequence to the shutter motion, ensuring flicker-free recordings.

  15. Locating Sequence on FPC Maps and Selecting a Minimal Tiling Path

    PubMed Central

    Engler, Friedrich W.; Hatfield, James; Nelson, William; Soderlund, Carol A.

    2003-01-01

This study discusses three software tools: the first two aid in integrating sequence with an FPC physical map, and the third automatically selects a minimal tiling path given genomic draft sequence and BAC end sequences. The first tool, FSD (FPC Simulated Digest), takes a sequenced clone and adds it back to the map based on a fingerprint generated by an in silico digest of the clone. This allows verification of sequenced clone positions and the integration of sequenced clones that were not originally part of the FPC map. The second tool, BSS (Blast Some Sequence), takes a query sequence and positions it on the map based on sequence associated with the clones in the map. BSS has multiple uses as follows: (1) When the query is a file of marker sequences, they can be added as electronic markers. (2) When the query is draft sequence, the results of BSS can be used to close gaps in a sequenced clone or the physical map. (3) When the query is a sequenced clone and the target is BAC end sequences, one may select the next clone for sequencing using both sequence comparison results and map location. (4) When the query is whole-genome draft sequence and the target is BAC end sequences, the results can be used to select many clones for a minimal tiling path at once. The third tool, pickMTP, automates the majority of this last usage of BSS. Results are presented using the rice FPC map, BAC end sequences, and whole-genome shotgun from Syngenta. PMID:12915486

  16. A better sequence-read simulator program for metagenomics.

    PubMed

    Johnson, Stephen; Trost, Brett; Long, Jeffrey R; Pittet, Vanessa; Kusalik, Anthony

    2014-01-01

    There are many programs available for generating simulated whole-genome shotgun sequence reads. The data generated by many of these programs follow predefined models, which limits their use to the authors' original intentions. For example, many models assume that read lengths follow a uniform or normal distribution. Other programs generate models from actual sequencing data, but are limited to reads from single-genome studies. To our knowledge, there are no programs that allow a user to generate simulated data following non-parametric read-length distributions and quality profiles based on empirically-derived information from metagenomics sequencing data. We present BEAR (Better Emulation for Artificial Reads), a program that uses a machine-learning approach to generate reads with lengths and quality values that closely match empirically-derived distributions. BEAR can emulate reads from various sequencing platforms, including Illumina, 454, and Ion Torrent. BEAR requires minimal user input, as it automatically determines appropriate parameter settings from user-supplied data. BEAR also uses a unique method for deriving run-specific error rates, and extracts useful statistics from the metagenomic data itself, such as quality-error models. Many existing simulators are specific to a particular sequencing technology; however, BEAR is not restricted in this way. Because of its flexibility, BEAR is particularly useful for emulating the behaviour of technologies like Ion Torrent, for which no dedicated sequencing simulators are currently available. BEAR is also the first metagenomic sequencing simulator program that automates the process of generating abundances, which can be an arduous task. BEAR is useful for evaluating data processing tools in genomics. It has many advantages over existing comparable software, such as generating more realistic reads and being independent of sequencing technology, and has features particularly useful for metagenomics work.
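The non-parametric length model that distinguishes this approach from uniform- or normal-length simulators can be sketched simply. The code below is not BEAR itself; it is a minimal empirical sampler, with a hypothetical set of observed lengths, that draws read lengths in proportion to their observed frequencies rather than from a predefined distribution.

```python
import random
from collections import Counter

def empirical_length_sampler(observed_lengths, rng=None):
    """Build a non-parametric read-length sampler from observed reads.

    Rather than assuming a uniform or normal length model, draw lengths
    in proportion to their empirical frequencies.
    """
    rng = rng or random.Random()
    counts = Counter(observed_lengths)
    lengths = list(counts)
    weights = [counts[L] for L in lengths]

    def sample(n):
        return rng.choices(lengths, weights=weights, k=n)

    return sample

# Hypothetical lengths harvested from a real metagenomic run.
observed = [98, 100, 100, 100, 150, 150, 151]
sampler = empirical_length_sampler(observed, rng=random.Random(0))
simulated = sampler(1000)
```

The same frequency-table trick extends to per-position quality values, which is the spirit of the run-specific quality-error models the abstract describes.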

  17. Performing SELEX experiments in silico

    NASA Astrophysics Data System (ADS)

    Wondergem, J. A. J.; Schiessel, H.; Tompitak, M.

    2017-11-01

    Due to the sequence-dependent nature of the elasticity of DNA, many protein-DNA complexes and other systems in which DNA molecules must be deformed have preferences for the type of DNA sequence they interact with. SELEX (Systematic Evolution of Ligands by EXponential enrichment) experiments and similar sequence selection experiments have been used extensively to examine the (indirect readout) sequence preferences of, e.g., nucleosomes (protein spools around which DNA is wound for compactification) and DNA rings. We show how recently developed computational and theoretical tools can be used to emulate such experiments in silico. Opening up this possibility comes with several benefits. First, it allows us a better understanding of our models and systems, specifically about the roles played by the simulation temperature and the selection pressure on the sequences. Second, it allows us to compare the predictions made by the model of choice with experimental results. We find agreement on important features between predictions of the rigid base-pair model and experimental results for DNA rings and interesting differences that point out open questions in the field. Finally, our simulations allow application of the SELEX methodology to systems that are experimentally difficult to realize because they come with high energetic costs and are therefore unlikely to form spontaneously, such as very short or overwound DNA rings.

  18. Effect of sequence and stereochemistry reversal on p53 peptide mimicry.

    PubMed

    Atzori, Alessio; Baker, Audrey E; Chiu, Mark; Bryce, Richard A; Bonnet, Pascal

    2013-01-01

Peptidomimetics effective in modulating protein-protein interactions and resistant to proteolysis have potential in therapeutic applications. An appealing yet underperforming peptidomimetic strategy is to employ D-amino acids and reversed sequences to mimic a lead peptide conformation, either separately or as the combined retro-inverso peptide. In this work, we examine the conformations of inverse, reverse and retro-inverso peptides of p53(15-29) using implicit solvent molecular dynamics simulation and circular dichroism spectroscopy. In order to obtain converged ensembles for the peptides, we find enhanced sampling is required via the replica exchange molecular dynamics method. From these replica exchange simulations, the D-peptide analogues of p53(15-29) adopt a predominantly left-handed helical conformation. When the parent sequence is reversed, as either the L-peptide or the D-peptide, the peptides display a greater helical propensity, a feature reflected by NMR and CD studies in TFE/water solvent. The simulations also indicate that, while approximately similar orientations of the side-chains are possible for the peptide analogues, their ability to mimic the parent peptide is severely compromised by backbone orientation (for D-amino acids) and side-chain orientation (for reversed sequences). A retro-inverso peptide is disadvantaged as a mimic in both aspects, and further chemical modification is required to enable this concept to be used fruitfully in peptidomimetic design. The replica exchange molecular simulation approach adopted here, with its ability to provide detailed conformational insights into modified peptides, has potential as a tool to guide structure-based design of new improved peptidomimetics.

  19. Implementation into earthquake sequence simulations of a rate- and state-dependent friction law incorporating pressure solution creep

    NASA Astrophysics Data System (ADS)

    Noda, H.

    2016-05-01

    Pressure solution creep (PSC) is an important elementary process in rock friction at high temperatures where solubilities of rock-forming minerals are significantly large. It significantly changes the frictional resistance and enhances time-dependent strengthening. A recent microphysical model for PSC-involved friction of clay-quartz mixtures, which can explain a transition between dilatant and non-dilatant deformation (d-nd transition), was modified here and implemented in dynamic earthquake sequence simulations. The original model resulted in essentially a kind of rate- and state-dependent friction (RSF) law, but assumed a constant friction coefficient for clay resulting in zero instantaneous rate dependency in the dilatant regime. In this study, an instantaneous rate dependency for the clay friction coefficient was introduced, consistent with experiments, resulting in a friction law suitable for earthquake sequence simulations. In addition, a term for time-dependent strengthening due to PSC was added which makes the friction law logarithmically rate-weakening in the dilatant regime. The width of the zone in which clasts overlap or, equivalently, the interface porosity involved in PSC plays a role as the state variable. Such a concrete physical meaning of the state variable is a great advantage in future modelling studies incorporating other physical processes such as hydraulic effects. Earthquake sequence simulations with different pore pressure distributions demonstrated that excess pore pressure at depth causes deeper rupture propagation with smaller slip per event and a shorter recurrence interval. The simulated ruptures were arrested a few kilometres below the point of pre-seismic peak stress at the d-nd transition and did not propagate spontaneously into the region of pre-seismic non-dilatant deformation. PSC weakens the fault against slow deformation and thus such a region cannot produce a dynamic stress drop. 
Dynamic rupture propagation further down to the brittle-plastic transition, evidenced by geological observations, would require even smaller frictional resistance at coseismic slip rates, suggesting the importance of implementing dynamic weakening activated at coseismic slip rates for more realistic simulation of earthquake sequences. The present models produced much smaller afterslip at the deeper parts of arrested ruptures than those with logarithmic RSF laws because of the more significant rate-strengthening effect of linearly viscous PSC. Detailed investigation of afterslip would provide clues to the deformation mechanism that controls the shear resistance of the fault in the region where earthquake ruptures arrest.

  20. Simulating secondary waterflooding in heterogeneous rocks with variable wettability using an image-based, multiscale pore network model

    NASA Astrophysics Data System (ADS)

    Bultreys, Tom; Van Hoorebeke, Luc; Cnudde, Veerle

    2016-09-01

    The two-phase flow properties of natural rocks depend strongly on their pore structure and wettability, both of which are often heterogeneous throughout the rock. To better understand and predict these properties, image-based models are being developed. However, the resulting simulations are problematic in several important classes of rocks with broad pore-size distributions. We present a new multiscale pore network model to simulate secondary waterflooding in these rocks, which may undergo wettability alteration after primary drainage. This approach makes it possible to include the effect of microporosity on the imbibition sequence without the need to describe each individual micropore. Instead, we show that fluid transport through unresolved pores can be taken into account in an upscaled fashion, by the inclusion of symbolic links between macropores, resulting in strongly decreased computational demands. Rules to describe the behavior of these links in the quasistatic invasion sequence are derived from percolation theory. The model is validated by comparison to a fully detailed network representation, which takes each separate micropore into account. Strongly and weakly water- and oil-wet simulations show good results, as do mixed-wettability scenarios with different pore-scale wettability distributions. We also show simulations on a network extracted from a micro-CT scan of Estaillades limestone, which yield good agreement with water-wet and mixed-wet experimental results.

  1. Assessing the accuracy of using oscillating gradient spin echo sequences with AxCaliber to infer micron-sized axon diameters.

    PubMed

    Mercredi, Morgan; Vincent, Trevor J; Bidinosti, Christopher P; Martin, Melanie

    2017-02-01

    Current magnetic resonance imaging (MRI) axon diameter measurements rely on the pulsed gradient spin-echo sequence, which is unable to provide diffusion times short enough to measure small axon diameters. This study combines the AxCaliber axon diameter fitting method with data generated from Monte Carlo simulations of oscillating gradient spin-echo (OGSE) sequences to infer micron-sized axon diameters, in order to determine the feasibility of using MRI to infer smaller axon diameters in brain tissue. Monte Carlo computer simulation data were synthesized from tissue geometries of cylinders of different diameters using a range of gradient frequencies in the cosine OGSE sequence. Data were fitted to the AxCaliber method modified to allow the new pulse sequence. Intra- and extra-axonal water were studied separately and together. The simulations revealed the extra-axonal model to be problematic. Rather than change the model, we found that restricting the range of gradient frequencies such that the measured apparent diffusion coefficient was constant over that range resulted in more accurate fitted diameters. Thus, either a careful selection of frequency ranges or adaptations to the method are needed for AxCaliber to correctly model extra-axonal water. This restriction helped reduce the necessary gradient strengths for measurements that could be performed with parameters feasible for a Bruker BG6 gradient set. For these experiments, the simulations inferred diameters as small as 0.5 μm on square-packed and randomly packed cylinders. The accuracy of the inferred diameters was found to be dependent on the signal-to-noise ratio (SNR), with smaller diameters more affected by noise, although all diameter distributions were distinguishable from one another for all SNRs tested. The results of this study indicate the feasibility of using MRI with OGSE on preclinical scanners to infer small axon diameters.
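
    The restricted diffusion that such simulations probe can be illustrated with a minimal Monte Carlo random walk inside an impermeable cylinder cross-section. This is an illustrative sketch, not the authors' simulator; the diffusivity, radii and step counts below are hypothetical:

```python
import random, math

def restricted_msd(radius_um, d_free=2.0, dt=1e-4, n_steps=2000, n_spins=400, seed=1):
    """Monte Carlo random walk of water molecules confined to a reflecting
    cylinder cross-section (2D). d_free is in um^2/ms, dt in ms. Returns the
    mean squared displacement (um^2) at the final time; restriction by the
    wall lowers it below the free-diffusion value 4*d_free*t."""
    rng = random.Random(seed)
    step = math.sqrt(4 * d_free * dt)   # 2D step length matching d_free
    msd = 0.0
    for _ in range(n_spins):
        x = y = 0.0                      # start each spin at the cylinder axis
        for _ in range(n_steps):
            theta = rng.uniform(0, 2 * math.pi)
            nx, ny = x + step * math.cos(theta), y + step * math.sin(theta)
            if nx * nx + ny * ny <= radius_um ** 2:  # reject steps through the wall
                x, y = nx, ny
        msd += x * x + y * y
    return msd / n_spins
```

    In a 1 μm cylinder the displacement saturates near the pore size, while a 50 μm cylinder reproduces nearly free diffusion over the same time; it is this radius-dependent attenuation that diameter-fitting methods such as AxCaliber exploit.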

  2. SU-E-J-90: MRI-Based Treatment Simulation and Patient Setup for Radiation Therapy of Brain Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y; Cao, M; Han, F

    2014-06-01

    Purpose: Traditional radiation therapy of cancer is heavily dependent on CT. CT provides excellent depiction of the bones but lacks good soft tissue contrast, which makes contouring difficult. Often, MRIs are fused with CT to take advantage of their superior soft tissue contrast. Such an approach has drawbacks. It is desirable to perform treatment simulation entirely based on MRI. To achieve MR-based simulation for radiation therapy, bone imaging is an important challenge because of the low MR signal intensity from bone due to its ultra-short T2 and T1, which presents difficulty for both dose calculation and patient setup in terms of digitally reconstructed radiograph (DRR) generation. Current solutions either require manual bone contouring or multiple MR scans. We present a technique to generate DRRs using MRI with an Ultra Short Echo Time (UTE) sequence which is applicable to both OBI and ExacTrac 2D patient setup. Methods: Seven brain cancer patients were scanned at 1.5 Tesla using a radial UTE sequence. The sequence acquires two images at two different echo times. The two images were processed using in-house software. The resultant bone images were subsequently loaded into commercial systems to generate DRRs. Simulation and patient clinical on-board images were used to evaluate 2D patient setup with MRI-DRRs. Results: The majority of bones are well visualized in all patients. The fused image of patient CT with the MR bone image demonstrates the accuracy of automatic bone identification using our technique. The generated DRRs are of good quality. Accuracy of 2D patient setup using MRI-DRRs is comparable to CT-based 2D patient setup. Conclusion: This study shows the potential of DRR generation with a single MR sequence. Further work will be needed on MR sequence development and post-processing procedures to achieve robust MR bone imaging for other human sites in addition to brain.

  3. Breaking Lander-Waterman’s Coverage Bound

    PubMed Central

    Nashta-ali, Damoun; Motahari, Seyed Abolfazl; Hosseinkhalaj, Babak

    2016-01-01

    Lander-Waterman’s coverage bound establishes the total number of reads required to cover the whole genome of size G bases. In fact, their bound is a direct consequence of the well-known solution to the coupon collector’s problem, which proves that for such a genome, the total number of bases to be sequenced should be O(G ln G). Although the result leads to a tight bound, it is based on a tacit assumption that the set of reads is first collected through a sequencing process and then processed through a computation process, i.e., there are two different machines: one for sequencing and one for processing. In this paper, we present a significant improvement compared to Lander-Waterman’s result and prove that by combining the sequencing and computing processes, one can re-sequence the whole genome with as low as O(G) sequenced bases in total. Our approach also dramatically reduces the required computational power for the combined process. Simulations were performed on real genomes with different sequencing error rates. The results support our theory, predicting the log G improvement on the coverage bound and a corresponding reduction in the total number of bases required to be sequenced. PMID:27806058
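
    The O(G ln G) baseline from the coupon collector's problem is easy to reproduce empirically. The sketch below (illustrative only, with an arbitrary toy genome size) counts uniform random draws until every position has been covered at least once:

```python
import random

def draws_to_cover(g, seed=0):
    """Coupon collector simulation: number of uniform random draws needed
    until every one of g positions has been seen at least once. The expected
    count is g*H_g, which is approximately g*ln(g) for large g."""
    rng = random.Random(seed)
    seen = set()
    draws = 0
    while len(seen) < g:
        seen.add(rng.randrange(g))  # each draw lands on a uniform position
        draws += 1
    return draws
```

    For g = 1000 the expected value is about 1000 · H_1000 ≈ 7500 draws, i.e. markedly more than the g draws that would suffice if sequencing and computation were combined as the paper proposes.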

  4. A comprehensive evaluation of assembly scaffolding tools

    PubMed Central

    2014-01-01

    Background Genome assembly is typically a two-stage process: contig assembly followed by the use of paired sequencing reads to join contigs into scaffolds. Scaffolds are usually the focus of reported assembly statistics; longer scaffolds greatly facilitate the use of genome sequences in downstream analyses, and it is appealing to present larger numbers as metrics of assembly performance. However, scaffolds are highly prone to errors, especially when generated using short reads, which can directly result in inflated assembly statistics. Results Here we provide the first independent evaluation of scaffolding tools for second-generation sequencing data. We find large variations in the quality of results depending on the tool and dataset used. Even extremely simple test cases of perfect input, constructed to elucidate the behaviour of each algorithm, produced some surprising results. We further dissect the performance of the scaffolders using real and simulated sequencing data derived from the genomes of Staphylococcus aureus, Rhodobacter sphaeroides, Plasmodium falciparum and Homo sapiens. The results from simulated data are of high quality, with several of the tools producing perfect output. However, at least 10% of joins remain unidentified when using real data. Conclusions The scaffolders vary in their usability, speed and number of correct and missed joins made between contigs. Results from real data highlight opportunities for further improvements of the tools. Overall, SGA, SOPRA and SSPACE generally outperform the other tools on our datasets. However, the quality of the results is highly dependent on the read mapper and genome complexity. PMID:24581555

  5. Dynamical System Modeling to Simulate Donor T Cell Response to Whole Exome Sequencing-Derived Recipient Peptides Demonstrates Different Alloreactivity Potential in HLA-Matched and -Mismatched Donor-Recipient Pairs.

    PubMed

    Abdul Razzaq, Badar; Scalora, Allison; Koparde, Vishal N; Meier, Jeremy; Mahmood, Musa; Salman, Salman; Jameson-Lee, Max; Serrano, Myrna G; Sheth, Nihar; Voelkner, Mark; Kobulnicky, David J; Roberts, Catherine H; Ferreira-Gonzalez, Andrea; Manjili, Masoud H; Buck, Gregory A; Neale, Michael C; Toor, Amir A

    2016-05-01

    Immune reconstitution kinetics and subsequent clinical outcomes in HLA-matched recipients of allogeneic stem cell transplantation (SCT) are variable and difficult to predict. Considering SCT as a dynamical system may allow sequence differences across the exomes of the transplant donors and recipients to be used to simulate an alloreactive T cell response, which may allow better clinical outcome prediction. To accomplish this, whole exome sequencing was performed on 34 HLA-matched SCT donor-recipient pairs (DRPs) and the nucleotide sequence differences translated to peptides. The binding affinity of the peptides to the relevant HLA in each DRP was determined. The resulting array of peptide-HLA binding affinity values in each patient was considered as an operator modifying a hypothetical T cell repertoire vector, in which each T cell clone proliferates in accordance with the logistic equation of growth. Using an iterating system of matrices, each simulated T cell clone's growth was calculated with the steady-state population being proportional to the magnitude of the binding affinity of the driving HLA-peptide complex. Incorporating competition between T cell clones responding to different HLA-peptide complexes reproduces a number of features of clinically observed T cell clonal repertoire in the simulated repertoire, including sigmoidal growth kinetics of individual T cell clones and overall repertoire, Power Law clonal frequency distribution, increase in repertoire complexity over time with increasing clonal diversity, and alteration of clonal dominance when a different antigen array is encountered, such as in SCT. The simulated, alloreactive T cell repertoire was markedly different in HLA-matched DRPs. The patterns were differentiated by rate of growth and steady-state magnitude of the simulated T cell repertoire and demonstrate a possible correlation with survival. 
In conclusion, exome-wide sequence differences in DRPs may allow simulation of the donor alloreactive T cell response to recipient antigens and may provide a quantitative basis for refining donor selection and titration of immunosuppression after SCT. Copyright © 2016 American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  6. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    PubMed

    Kelly, Steven; Maini, Philip K

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data, we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum likelihood methods applied to good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to a previously published analysis of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  7. The Use of Behavioral Skills Training and in situ Feedback to Protect Children with Autism from Abduction Lures

    ERIC Educational Resources Information Center

    Gunby, Kristin V.; Rapp, John T.

    2014-01-01

    We examined the effects of behavioral skills training with in situ feedback on safe responding by children with autism to abduction lures that were presented after a high-probability (high-p) request sequence. This sequence was intended to simulate a grooming or recruitment process. Results show that all 3 participants ultimately acquired the…

  8. A control strategy for grid-side converter of DFIG under unbalanced condition based on DIgSILENT/PowerFactory

    NASA Astrophysics Data System (ADS)

    Han, Pingping; Zhang, Haitian; Chen, Lingqi; Zhang, Xiaoan

    2018-01-01

    The models of a doubly fed induction generator (DFIG) and its grid-side converter (GSC) are established under unbalanced grid conditions in DIgSILENT/PowerFactory. According to the mathematical model, the vector equations of the positive and negative sequence voltages and currents are deduced in the positive sequence synchronously rotating d-q-0 reference frame, taking the characteristics of the simulation software fully into account. Moreover, the reference values of the GSC current components in the positive sequence d-q-0 frame under unbalanced conditions are derived, incorporating the national limits on unbalanced current, to improve the traditional GSC control. The simulation results indicate that the control strategy can effectively restrain the negative sequence current and the double-frequency power oscillation on the GSC's AC side. The DC bus voltage is maintained at a constant value, ensuring uninterrupted operation of the DFIG under unbalanced grid conditions.
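
    The positive/negative sequence decomposition on which such GSC control schemes rest is the standard Fortescue transform of symmetrical components, sketched below as a generic illustration (not the paper's DIgSILENT model):

```python
import cmath, math

A = cmath.exp(2j * math.pi / 3)  # the 120-degree rotation operator 'a'

def symmetrical_components(va, vb, vc):
    """Fortescue decomposition of three phasors into zero-, positive- and
    negative-sequence components (all referred to phase a). A balanced
    positive-sequence set yields v1 only; any unbalance shows up in v0/v2."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A * A * vc) / 3
    v2 = (va + A * A * vb + A * vc) / 3
    return v0, v1, v2
```

    A controller like the one described regulates the d-q components derived from v1 while driving the v2 (negative-sequence) current component toward zero.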

  9. Structure stability of lytic peptides during their interactions with lipid bilayers.

    PubMed

    Chen, H M; Lee, C H

    2001-10-01

    In this work, molecular dynamics simulations were used to examine the consequences of a variety of analogs of cecropin A on lipid bilayers. Analog sequences were constructed by replacing either the N- or C-terminal helix with the other helix in native or reverse sequence order, by making palindromic peptides based on both the N- and C-terminal helices, and by deleting the hinge region. The structure of the peptides was monitored throughout the simulation. The hinge region appeared not to assist in maintaining helical structure but to aid flexibility of motion. In general, the N-terminal helix of the peptides was less stable than the C-terminal one during the interaction with anionic lipid bilayers. Sequences with hydrophobic helices tended to regain helical structure after an initial loss, while sequences with amphipathic helices were less able to do this. The results suggest that peptides designed with hydrophobic helices have high structural stability in an anionic membrane and are candidates for experimental investigation.

  10. Local alignment of two-base encoded DNA sequence

    PubMed Central

    Homer, Nils; Merriman, Barry; Nelson, Stanley F

    2009-01-01

    Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two-base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
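
    The standard dynamic programming recurrence that the paper extends can be sketched as a plain Smith-Waterman local alignment, i.e. the baseline the authors contrast against (scoring parameters below are arbitrary, and a linear rather than affine gap penalty is used for brevity):

```python
def smith_waterman(s, t, match=2, mismatch=-1, gap=-2):
    """Standard Smith-Waterman local alignment score with a linear gap
    penalty. The paper's method extends this recurrence so that each cell
    additionally tracks decoding states of the two-base-encoded read; this
    is the plain-sequence version."""
    rows, cols = len(s) + 1, len(t) + 1
    h = [[0] * cols for _ in range(rows)]  # score matrix, clamped at zero
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```

    The clamp at zero is what makes the alignment local: a poorly matching prefix is simply discarded rather than dragging the score down.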

  11. Differential evolution-simulated annealing for multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gaps percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem for the hybrid evolutionary algorithm DESA; thus, we name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions, particularly for the MSA problem evaluated on the basis of the three objectives.
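
    The DESA idea of combining differential evolution moves with a simulated-annealing-like acceptance rule can be sketched on a generic objective function. This is an illustrative single-objective stand-in, not the authors' MSA scoring; all control parameters are hypothetical:

```python
import random, math

def desa_minimize(f, bounds, pop_size=20, iters=300, f_w=0.7, cr=0.9, t0=1.0, seed=3):
    """Differential evolution with SA-like acceptance: a worse trial vector
    may still replace its parent with probability exp(-delta/T), where the
    temperature T cools geometrically each generation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    t = t0
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # DE/rand/1 mutant with per-dimension crossover, clipped to bounds
            trial = [pop[i][d] if rng.random() > cr
                     else min(max(pop[a][d] + f_w * (pop[b][d] - pop[c][d]),
                                  bounds[d][0]), bounds[d][1])
                     for d in range(dim)]
            delta = f(trial) - cost[i]
            if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
                pop[i], cost[i] = trial, cost[i] + delta
        t *= 0.97  # geometric cooling schedule
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

    Early on, the high temperature lets the population escape local optima; as T cools, the selection becomes the usual greedy DE replacement.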

  12. Efficient experimental design and analysis strategies for the detection of differential expression using RNA-Sequencing

    PubMed Central

    2012-01-01

    Background RNA sequencing (RNA-Seq) has emerged as a powerful approach for the detection of differential gene expression, with both high-throughput and high-resolution capabilities possible depending upon the experimental design chosen. Multiplex experimental designs are now readily available; these can be utilised to increase the number of samples or replicates profiled at the cost of decreased sequencing depth generated per sample. These strategies impact on the power of the approach to accurately identify differential expression. This study presents a detailed analysis of the power to detect differential expression in a range of scenarios, including simulated null and differential expression distributions with varying numbers of biological or technical replicates, sequencing depths and analysis methods. Results Differential and non-differential expression datasets were simulated using a combination of negative binomial and exponential distributions derived from real RNA-Seq data. These datasets were used to evaluate the performance of three commonly used differential expression analysis algorithms and to quantify the changes in power with respect to true and false positive rates when simulating variations in sequencing depth, biological replication and multiplex experimental design choices. Conclusions This work quantitatively explores comparisons between contemporary analysis tools and experimental design choices for the detection of differential expression using RNA-Seq. We found that the DESeq algorithm performs more conservatively than edgeR and NBPSeq. With regard to testing of various experimental designs, this work strongly suggests that greater power is gained through the use of biological replicates relative to library (technical) replicates and sequencing depth. Strikingly, sequencing depth could be reduced as low as 15% without substantial impacts on false positive or true positive rates. PMID:22985019
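
    Negative binomial count simulation of the kind described can be sketched with only the standard library, using the Poisson-Gamma mixture representation. The mean and dispersion values below are illustrative, not taken from the study:

```python
import random

def nb_sample(mu, dispersion, rng):
    """One negative-binomial count via the Poisson-Gamma mixture:
    lambda ~ Gamma(1/dispersion, mu*dispersion), count ~ Poisson(lambda),
    giving mean mu and variance mu + dispersion*mu**2."""
    lam = rng.gammavariate(1.0 / dispersion, mu * dispersion)
    # Knuth's Poisson sampler (adequate for the moderate means used here)
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_gene_counts(mu=100.0, dispersion=0.2, n=5000, seed=42):
    """Simulate n replicate counts for one gene and return the empirical
    mean and variance, which should follow the NB mean-variance relation."""
    rng = random.Random(seed)
    counts = [nb_sample(mu, dispersion, rng) for _ in range(n)]
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return mean, var
```

    The overdispersion term dispersion·mu² is what distinguishes RNA-Seq counts from Poisson noise, and it is the reason biological replicates matter more than extra sequencing depth.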

  13. Hydration properties of natural and synthetic DNA sequences with methylated adenine or cytosine bases in the R.DpnI target and BDNF promoter studied by molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Shanak, Siba; Helms, Volkhard

    2014-12-01

    Adenine and cytosine methylation are two important epigenetic modifications of DNA sequences at the levels of the genome and transcriptome. To characterize the differential roles of methylating adenine or cytosine with respect to their hydration properties, we performed conventional MD simulations and free energy perturbation calculations for two particular DNA sequences, namely the brain-derived neurotrophic factor (BDNF) promoter and the R.DpnI-bound DNA that are known to undergo methylation to C5-methyl cytosine and N6-methyl adenine, respectively. We found that a single methylated cytosine has a clearly favorable hydration free energy over cytosine since the attached methyl group has a slightly polar character. In contrast, capping the strongly polar N6 of adenine with a methyl group gives a slightly unfavorable contribution to its free energy of solvation. Performing the same demethylation in the context of a DNA double-strand gave quite similar results for the more solvent-accessible cytosine but much more unfavorable results for the rather buried adenine. Interestingly, the same demethylation reactions are far more unfavorable when performed in the context of the opposite (BDNF or R.DpnI target) sequence. This suggests a natural preference for methylation in a specific sequence context. In addition, free energy calculations for demethylating adenine or cytosine in the context of B-DNA vs. Z-DNA suggest that the conformational B-Z transition of DNA is rather a property of cytosine-methylated sequences but is not preferable for the adenine-methylated sequences investigated here.

  14. Hydration properties of natural and synthetic DNA sequences with methylated adenine or cytosine bases in the R.DpnI target and BDNF promoter studied by molecular dynamics simulations.

    PubMed

    Shanak, Siba; Helms, Volkhard

    2014-12-14

    Adenine and cytosine methylation are two important epigenetic modifications of DNA sequences at the levels of the genome and transcriptome. To characterize the differential roles of methylating adenine or cytosine with respect to their hydration properties, we performed conventional MD simulations and free energy perturbation calculations for two particular DNA sequences, namely the brain-derived neurotrophic factor (BDNF) promoter and the R.DpnI-bound DNA that are known to undergo methylation to C5-methyl cytosine and N6-methyl adenine, respectively. We found that a single methylated cytosine has a clearly favorable hydration free energy over cytosine since the attached methyl group has a slightly polar character. In contrast, capping the strongly polar N6 of adenine with a methyl group gives a slightly unfavorable contribution to its free energy of solvation. Performing the same demethylation in the context of a DNA double-strand gave quite similar results for the more solvent-accessible cytosine but much more unfavorable results for the rather buried adenine. Interestingly, the same demethylation reactions are far more unfavorable when performed in the context of the opposite (BDNF or R.DpnI target) sequence. This suggests a natural preference for methylation in a specific sequence context. In addition, free energy calculations for demethylating adenine or cytosine in the context of B-DNA vs. Z-DNA suggest that the conformational B-Z transition of DNA is rather a property of cytosine-methylated sequences but is not preferable for the adenine-methylated sequences investigated here.

  15. Aliphatic peptides show similar self-assembly to amyloid core sequences, challenging the importance of aromatic interactions in amyloidosis.

    PubMed

    Lakshmanan, Anupama; Cheong, Daniel W; Accardo, Angelo; Di Fabrizio, Enzo; Riekel, Christian; Hauser, Charlotte A E

    2013-01-08

    The self-assembly of abnormally folded proteins into amyloid fibrils is a hallmark of many debilitating diseases, from Alzheimer's and Parkinson's diseases to prion-related disorders and type II diabetes. However, the fundamental mechanism of amyloid aggregation remains poorly understood. Core sequences of four to seven amino acids within natural amyloid proteins that form toxic fibrils have been used to study amyloidogenesis. We recently reported a class of systematically designed ultrasmall peptides that self-assemble in water into cross-β-type fibers. Here we compare the self-assembly of these peptides with natural core sequences. These include core segments from Alzheimer's amyloid-β, human amylin, and calcitonin. We analyzed the self-assembly process using circular dichroism, electron microscopy, X-ray diffraction, rheology, and molecular dynamics simulations. We found that the designed aliphatic peptides exhibited a similar self-assembly mechanism to several natural sequences, with formation of α-helical intermediates being a common feature. Interestingly, the self-assembly of a second core sequence from amyloid-β, containing the diphenylalanine motif, was distinctly different from all other examined sequences. The diphenylalanine-containing sequence formed β-sheet aggregates without going through the α-helical intermediate step, giving a unique fiber-diffraction pattern and simulation structure. Based on these results, we propose a simplified aliphatic model system to study amyloidosis. Our results provide vital insight into the nature of early intermediates formed and suggest that aromatic interactions are not as important in amyloid formation as previously postulated. This information is necessary for developing therapeutic drugs that inhibit and control amyloid formation.

  16. Factors That Affect Large Subunit Ribosomal DNA Amplicon Sequencing Studies of Fungal Communities: Classification Method, Primer Choice, and Error

    PubMed Central

    Porter, Teresita M.; Golding, G. Brian

    2012-01-01

    Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross-validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short-read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215

  17. Does the sequence of instruction matter during simulation?

    PubMed

    Stefaniak, Jill E; Turkelson, Carman L

    2014-02-01

    Instructional strategies must be balanced when subjecting students to full-immersion simulation so as not to discourage learning and increase cognitive overload. The purpose of this study was to determine whether participating in a simulation exercise before lecture yielded better performance outcomes among novice learners. Twenty-nine participants were divided into 2 groups as follows: group 1 participated in simulation exercises followed by a didactic lecture, and group 2 participated in the same learning activities presented in the opposite order. Participants were administered a multiple-choice cognitive assessment upon completion of the workshop. Learners who participated in the simulated exercises followed by the didactic lecture performed better on postassessments than those who participated in the simulation after the lecture. A repeated-measures (nested) analysis of variance showed a statistically significant model fit, F(4, 54) = 176.07, P < 0.0001 (α = 0.05). Despite their higher levels of performance, 76% of those who participated in simulation activities first indicated that they would have preferred to participate in a lecture first. The findings of this study suggest that differences occur among learners when the sequencing of instructional components is altered. Learners who participated in simulation before lecture demonstrated increased knowledge compared with learners who participated in simulation after a lecture.

  18. Sequence dependency of canonical base pair opening in the DNA double helix

    PubMed Central

    Villa, Alessandra

    2017-01-01

    The flipping-out of a DNA base from the double-helical structure is a key step of many cellular processes, such as DNA replication, modification and repair. Base pair opening is the first step of base flipping, and the exact mechanism is still not well understood. We investigate sequence effects on base pair opening using extensive classical molecular dynamics simulations targeting the opening of 11 different canonical base pairs in two DNA sequences. Two popular biomolecular force fields are applied. To enhance sampling and calculate free energies, we bias the simulation along a simple distance coordinate using a newly developed adaptive sampling algorithm. The simulation is guided back and forth along the coordinate, allowing for multiple opening pathways. We compare the calculated free energies with those from an NMR study and check assumptions of the model used for interpreting the NMR data. Our results further show that the neighboring sequence is an important factor for the opening free energy, but also indicate that other sequence effects may play a role. All base pairs are observed to have a propensity for opening toward the major groove. The preferred opening base is cytosine for GC base pairs, while for AT there is sequence-dependent competition between the two bases. For AT opening, we identify two non-canonical base pair interactions contributing to a local minimum in the free energy profile. For both AT and CG we observe long-lived interactions with water and with sodium ions at specific sites on the open base pair. PMID:28369121

  19. A Nonparametric Approach For Representing Interannual Dependence In Monthly Streamflow Sequences

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Oneill, R.

    The estimation of risks associated with water management plans requires generation of synthetic streamflow sequences. The mathematical algorithms used to generate these sequences at monthly time scales are found lacking in two main respects: inability to preserve dependence attributes, particularly at large (seasonal to interannual) time lags; and a poor representation of observed distributional characteristics, in particular strong asymmetry or multimodality in the probability density function. Proposed here is an alternative that naturally incorporates both observed dependence and distributional attributes in the generated sequences. Use of a nonparametric framework provides an effective means for representing the observed probability distribution, while the use of a "variable kernel" ensures accurate modeling of streamflow data sets that contain a substantial number of zero flow values. A careful selection of prior flows imparts the appropriate short-term memory, while use of an "aggregate" flow variable allows representation of interannual dependence. The nonparametric simulation model is applied to monthly flows from the Beaver River near Beaver, Utah, USA, and the Burrendong dam inflows, New South Wales, Australia. Results indicate that while the use of traditional simulation approaches leads to an inaccurate representation of dependence at long (annual and interannual) time scales, the proposed model can simulate both short- and long-term dependence. As a result, the proposed model ensures a significantly improved representation of reservoir storage statistics, particularly for systems influenced by long droughts. 
It is important to note that the proposed method offers a simpler and better alternative to conventional disaggregation models as: (a) a separate annual flow series is not required, (b) stringent assumptions relating annual and monthly flows are not needed, and (c) the method does not require the specification of a "water year", instead ensuring that the sum of any sequence of flows lasting twelve months will result in the type of dependence that is observed in the historical annual flow series.
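
    The conditioning idea described above (short memory from the prior flow, interannual memory from a 12-month aggregate) can be sketched as a k-nearest-neighbour bootstrap. This is a hypothetical simplification, not the authors' kernel-based estimator; the toy flow series, k, and distance weighting are invented.

    ```python
    import random

    def knn_resample(history, k=3, n_months=24, seed=0):
        """Generate synthetic monthly flows by k-nearest-neighbour bootstrap:
        condition each new flow on the previous month's flow (short-term
        memory) plus the running 12-month aggregate (interannual memory)."""
        rng = random.Random(seed)
        # feature for historical month t: (flow[t-1], sum of flows[t-12:t])
        feats = [(history[t - 1], sum(history[t - 12:t]))
                 for t in range(12, len(history))]
        succ = [history[t] for t in range(12, len(history))]
        out = list(history[-12:])        # seed the simulation with the last year
        for _ in range(n_months):
            q = (out[-1], sum(out[-12:]))
            order = sorted(range(len(feats)),
                           key=lambda i: (feats[i][0] - q[0]) ** 2
                                       + ((feats[i][1] - q[1]) / 12.0) ** 2)
            out.append(succ[rng.choice(order[:k])])  # resample a neighbour's successor
        return out[12:]

    hist = [10, 12, 30, 55, 40, 20, 8, 5, 4, 6, 7, 9] * 5   # 5 years of toy seasonal flows
    synth = knn_resample(hist)
    ```

    Because synthetic values are resampled historical successors, the generated flows inherit the observed marginal distribution by construction.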

  20. Design and evaluation of an air traffic control Final Approach Spacing Tool

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Green, Steven M.; Nedell, William

    1991-01-01

    This paper describes the design and simulator evaluation of an automation tool for assisting terminal radar approach controllers in sequencing and spacing traffic onto the final approach course. The automation tool, referred to as the Final Approach Spacing Tool (FAST), displays speed and heading advisories for arriving aircraft as well as sequencing information on the controller's radar display. The main functional elements of FAST are a scheduler that schedules and sequences the traffic, a four-dimensional trajectory synthesizer that generates the advisories, and a graphical interface that displays the information to the controller. FAST has been implemented on a high-performance workstation. It can be operated as a stand-alone in the terminal radar approach control facility or as an element of a system integrated with automation tools in the air route traffic control center. FAST was evaluated by experienced air traffic controllers in a real-time air traffic control simulation. Simulation results summarized in the paper show that the automation tool significantly reduced controller workload and demonstrated a potential increase in landing rate.
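
    The abstract does not specify FAST's scheduling algorithm, so as a hedged illustration of the general sequencing-and-scheduling idea, here is a minimal first-come-first-served scheduler with a minimum-separation constraint. The flight IDs, ETAs, and the 90-second separation are invented.

    ```python
    def schedule_arrivals(etas, min_sep=90.0):
        """First-come-first-served scheduling: order flights by estimated time
        of arrival (ETA), then push each scheduled time back just enough to
        keep a minimum separation behind the preceding arrival."""
        order = sorted(etas, key=etas.get)
        sched, prev = {}, None
        for flight in order:
            t = etas[flight] if prev is None else max(etas[flight],
                                                      sched[prev] + min_sep)
            sched[flight] = t
            prev = flight
        return order, sched

    etas = {"AAL12": 0.0, "UAL7": 30.0, "DAL90": 200.0}   # seconds, hypothetical
    order, sched = schedule_arrivals(etas)
    # AAL12 keeps its ETA; UAL7 is delayed to 90 s behind it; DAL90 is unconstrained
    ```

    A real scheduler such as FAST's also handles runway assignment, aircraft-type-dependent separations, and trajectory feasibility, none of which appear in this sketch.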

  1. A novel adaptive needle insertion sequencing for robotic, single needle MR-guided high-dose-rate prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Borot de Battisti, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; Maenhout, M.; Moerland, M. A.

    2017-05-01

    MR-guided high-dose-rate (HDR) brachytherapy has gained increasing interest as a treatment for patients with localized prostate cancer because of the superior value of MRI for tumor and surrounding tissues localization. To enable needle insertion into the prostate with the patient in the MR bore, a single needle MR-compatible robotic system involving needle-by-needle dose delivery has been developed at our institution. Throughout the intervention, dose delivery may be impaired by: (1) sub-optimal needle positioning caused by e.g. needle bending, (2) intra-operative internal organ motion such as prostate rotations or swelling, or intra-procedural rectum or bladder filling. This may result in failure to reach clinical constraints. To address the first of these challenges, a recent study from our research group demonstrated that the deposited dose may be greatly improved by real-time adaptive planning with feedback on the actual needle positioning. However, the needle insertion sequence is left to the physician, which may result in sub-optimal dose delivery. In this manuscript, a new method is proposed to determine and update automatically the needle insertion sequence. This strategy is based on the determination of the most sensitive needle track. The sensitivity of a needle track is defined as its impact on the dose distribution in case of sub-optimal positioning. A stochastic criterion is thus presented to determine each needle track sensitivity based on needle insertion simulations. To assess the proposed sequencing strategy, HDR prostate brachytherapy was simulated on 11 patients with varying number of needle insertions. Sub-optimal needle positioning was simulated at each insertion (modeled by typical random angulation errors). In 91% of the scenarios, the dose distribution improved when the needle was inserted into the most sensitive rather than the least sensitive needle track. The computation time for sequencing was less than 6 s per needle track. 
The proposed needle insertion sequencing can therefore assist in delivering an optimal dose in HDR prostate brachytherapy.
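
    The stochastic sensitivity criterion can be caricatured in a few lines: simulate random angulation errors for each needle track and rank tracks by their average dose loss. The dose-response functions, error model, and numbers below are hypothetical stand-ins, not the paper's dosimetry.

    ```python
    import random

    def track_sensitivity(nominal_dose, dose_given_error, n_trials=200, seed=0):
        """Simplified stochastic sensitivity criterion: average the loss in
        delivered dose over simulated random angulation errors."""
        rng = random.Random(seed)
        losses = []
        for _ in range(n_trials):
            err = rng.gauss(0.0, 2.0)        # hypothetical angulation error, degrees
            losses.append(nominal_dose - dose_given_error(err))
        return sum(losses) / n_trials

    # hypothetical dose-response models for two needle tracks: track A's dose
    # degrades quadratically with error, track B is nearly insensitive
    sens_a = track_sensitivity(100.0, lambda e: 100.0 - 0.5 * e * e)
    sens_b = track_sensitivity(100.0, lambda e: 100.0 - 0.01 * e * e)
    # insert the most sensitive track first: here, track A
    ```

    Inserting the most sensitive track first spends the remaining adaptive-planning freedom on the needles whose errors matter most, which is the intuition behind the 91% improvement reported above.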

  2. Is plant mitochondrial RNA editing a source of phylogenetic incongruence? An answer from in silico and in vivo data sets.

    PubMed

    Picardi, Ernesto; Quagliariello, Carla

    2008-03-26

    In plant mitochondria, the post-transcriptional RNA editing process converts C to U at a number of specific sites of the mRNA sequence and usually restores phylogenetically conserved codons and the encoded amino acid residues. Sites undergoing RNA editing evolve at a higher rate than sites not modified by the process. As a result, editing sites strongly affect the evolution of plant mitochondrial genomes, representing an important source of sequence variability and potentially informative characters. To date no clear and convincing evidence has established whether or not editing sites really affect the topology of reconstructed phylogenetic trees. For this reason, we investigated here the effect of RNA editing on the tree building process of twenty different plant mitochondrial gene sequences and by means of computer simulations. Based on our simulation study we suggest that the editing 'noise' in tree topology inference is mainly manifested at the cDNA level. In particular, editing sites tend to confuse tree topologies when artificial genomic and cDNA sequences are generated shorter than 500 bp and with an editing percentage higher than 5.0%. Similar results have been also obtained with genuine plant mitochondrial genes. In this latter instance, indeed, the topology incongruence increases when the editing percentage goes up from about 3.0 to 14.0%. However, when the average gene length is higher than 1,000 bp (rps3, matR and atp1) no differences in the comparison between inferred genomic and cDNA topologies could be detected. Our findings from the in silico and in vivo data sets reported here strongly suggest that editing sites contribute to the generation of misleading phylogenetic trees if the analyzed mitochondrial gene sequence is highly edited (above 3.0%) and short (under 500 bp). 
Given the current lack of direct experimental evidence, the results presented here thus encourage the use of genomic mitochondrial rather than cDNA sequences for reconstructing phylogenetic events in land plants.
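
    The in silico setup's central step, deriving a cDNA-like sequence from a genomic one, can be illustrated with a toy editing function that converts a random subset of C positions to T at a chosen editing fraction. The sequence and fraction below are invented, not the paper's data.

    ```python
    import random

    def edit_transcript(genomic, editing_fraction, seed=0):
        """Convert a random subset of C positions to T (the cDNA image of
        C-to-U editing), at editing_fraction of the sequence length."""
        rng = random.Random(seed)
        seq = list(genomic)
        c_sites = [i for i, b in enumerate(seq) if b == "C"]
        n_edit = min(len(c_sites), int(editing_fraction * len(seq)))
        for i in rng.sample(c_sites, n_edit):
            seq[i] = "T"
        return "".join(seq)

    genomic = "ATGCCGTACCGATCCGGACCTTACGGC"   # 27 bp toy sequence
    cdna = edit_transcript(genomic, editing_fraction=0.10)
    diffs = sum(a != b for a, b in zip(genomic, cdna))
    ```

    Trees built from many such edited cDNA copies versus their genomic sources are then compared for topological incongruence, as the simulation study above describes.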

  3. Simulation of the excitation of quasi-plane wake waves in a plasma by a resonant sequence of laser pulses with a variable envelope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinnikova, E. I.; Levchenko, V. D.

    2008-04-15

    Results are presented from full-scale numerical simulations of the excitation of wake waves by a sequence of weakly relativistic laser pulses in a subcritical plasma. Computations were carried out with a 2D3V version of the SUR-CA code that is based on the local-recursive nonlocal-asynchronous algorithm of the particle-in-cell method. The parameters of a train of laser pulses were chosen to correspond to the resonant excitation of the wake field. The curvature of the envelope of the pulses was chosen to depend on the number of the pulse in the train. Numerical simulations showed that there are plane waves during the first period of the plasma wave behind the pulse train.

  4. An application of sedimentation simulation in Tahe oilfield

    NASA Astrophysics Data System (ADS)

    Tingting, He; Lei, Zhao; Xin, Tan; Dongxu, He

    2017-12-01

    The braided river delta developed in the Triassic low oil formation in block 9 of the Tahe oilfield, but its sedimentation evolution process is unclear. Using sedimentation simulation technology, the sedimentation process and distribution of the braided river delta are studied based on geological parameters including sequence stratigraphic division, initial sedimentation environment, relative lake level change and accommodation change, source supply, and sedimentary transport pattern. The simulation shows that the error between simulated and actual strata thickness is small, and the single-well analysis of the simulation is highly consistent with the actual analysis, indicating that the model is reliable. The study area records a retrogradational braided river delta evolution, which provides a favorable basis for fine reservoir description and prediction.

  5. Effect of read-mapping biases on detecting allele-specific expression from RNA-sequencing data

    PubMed Central

    Degner, Jacob F.; Marioni, John C.; Pai, Athma A.; Pickrell, Joseph K.; Nkadori, Everlyne; Gilad, Yoav; Pritchard, Jonathan K.

    2009-01-01

    Motivation: Next-generation sequencing has become an important tool for genome-wide quantification of DNA and RNA. However, a major technical hurdle lies in the need to map short sequence reads back to their correct locations in a reference genome. Here, we investigate the impact of SNP variation on the reliability of read-mapping in the context of detecting allele-specific expression (ASE). Results: We generated 16 million 35 bp reads from mRNA of each of two HapMap Yoruba individuals. When we mapped these reads to the human genome we found that, at heterozygous SNPs, there was a significant bias toward higher mapping rates of the allele in the reference sequence, compared with the alternative allele. Masking known SNP positions in the genome sequence eliminated the reference bias but, surprisingly, did not lead to more reliable results overall. We find that even after masking, ∼5–10% of SNPs still have an inherent bias toward more effective mapping of one allele. Filtering out inherently biased SNPs removes 40% of the top signals of ASE. The remaining SNPs showing ASE are enriched in genes previously known to harbor cis-regulatory variation or known to show uniparental imprinting. Our results have implications for a variety of applications involving detection of alternate alleles from short-read sequence data. Availability: Scripts, written in Perl and R, for simulating short reads, masking SNP variation in a reference genome and analyzing the simulation output are available upon request from JFD. Raw short read data were deposited in GEO (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE18156. Contact: jdegner@uchicago.edu; marioni@uchicago.edu; gilad@uchicago.edu; pritch@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19808877
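
    A reference-mapping bias of the kind described is commonly screened for with an exact binomial test of the null hypothesis that reads map equally well to both alleles. The sketch below is self-contained and uses invented read counts; it does not reproduce the paper's pipeline.

    ```python
    from math import comb

    def binom_two_sided_p(ref, alt):
        """Two-sided exact binomial test of the null that reads at a
        heterozygous SNP map equally often to the reference and the
        alternative allele (p = 0.5): sum the probabilities of all
        outcomes at least as extreme as the observed one."""
        n = ref + alt
        p_obs = comb(n, ref) * 0.5 ** n
        return min(1.0, sum(comb(n, k) * 0.5 ** n
                            for k in range(n + 1)
                            if comb(n, k) * 0.5 ** n <= p_obs))

    # hypothetical counts: 90 reads carry the reference allele, 60 the alternative
    p = binom_two_sided_p(90, 60)
    ```

    A significant skew toward the reference allele at many SNPs is the signature of the mapping bias the study quantifies; the paper's further step of masking SNP positions addresses only part of it.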

  6. A nonadaptive origin of a beneficial trait: in silico selection for free energy of folding leads to the neutral emergence of mutational robustness in single domain proteins.

    PubMed

    Pagan, Rafael F; Massey, Steven E

    2014-02-01

    Proteins are regarded as being robust to the deleterious effects of mutations. Here, the neutral emergence of mutational robustness in a population of single domain proteins is explored using computer simulations. A pairwise contact model was used to calculate the ΔG of folding (ΔG folding) using the three dimensional protein structure of leech eglin C. A random amino acid sequence with low mutational robustness, defined as the average ΔΔG resulting from a point mutation (ΔΔG average), was threaded onto the structure. A population of 1,000 threaded sequences was evolved under selection for stability, using an upper and lower energy threshold. Under these conditions, mutational robustness increased over time in the most common sequence in the population. In contrast, when the wild type sequence was used it did not show an increase in robustness. This implies that the emergence of mutational robustness is sequence specific and that wild type sequences may be close to maximal robustness. In addition, an inverse relationship between ∆∆G average and protein stability is shown, resulting partly from a larger average effect of point mutations in more stable proteins. The emergence of mutational robustness was also observed in the Escherichia coli colE1 Rop and human CD59 proteins, implying that the property may be common in single domain proteins under certain simulation conditions. The results indicate that at least a portion of mutational robustness in small globular proteins might have arisen by a process of neutral emergence, and could be an example of a beneficial trait that has not been directly selected for, termed a "pseudaptation."
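
    The ΔΔG-average measure of mutational robustness can be sketched with a toy pairwise contact model. The contact map, hydrophobic-contact energy function, and sequence below are invented for illustration and are not the eglin C model used in the study.

    ```python
    import random

    def ddG_average(seq, contacts, energy, n_mutations=500, seed=0):
        """Average destabilization (ddG) over random point mutations, using
        a toy pairwise contact model: G = sum of energies over contacts."""
        rng = random.Random(seed)
        alphabet = "ACDEFGHIKLMNPQRSTVWY"

        def G(s):
            return sum(energy(s[i], s[j]) for i, j in contacts)

        g0 = G(seq)
        total = 0.0
        for _ in range(n_mutations):
            pos = rng.randrange(len(seq))
            aa = rng.choice([a for a in alphabet if a != seq[pos]])
            total += G(seq[:pos] + aa + seq[pos + 1:]) - g0
        return total / n_mutations

    # hypothetical energy: hydrophobic-hydrophobic contacts stabilize by -1
    hydrophobic = set("AFILMVWY")
    energy = lambda a, b: -1.0 if (a in hydrophobic and b in hydrophobic) else 0.0
    seq = "MKVLFAILVG"                       # toy 10-residue sequence
    contacts = [(0, 5), (2, 7), (3, 8), (1, 6)]
    robustness = ddG_average(seq, contacts, energy)
    ```

    A smaller ΔΔG average means greater robustness; in the study this quantity is tracked over generations of a population evolving under stability selection.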

  7. Assembling in Sequence: A Saleable Work Skill. Occupation Simulation Packet. Grades 3rd-4th.

    ERIC Educational Resources Information Center

    Hueston, Jean

    This teacher's guide for grades 3 and 4 contains simulated work experiences for students using the isolated skill concept - assembling in sequence. Teacher instructions include objectives, evaluation, and sequence of activities. The guide contains pre-tests and post-tests with instructions and answer keys. Three pre-skill activities are suggested,…

  8. Universal Sequence Replication, Reversible Polymerization and Early Functional Biopolymers: A Model for the Initiation of Prebiotic Sequence Evolution

    PubMed Central

    Walker, Sara Imari; Grover, Martha A.; Hud, Nicholas V.

    2012-01-01

    Many models for the origin of life have focused on understanding how evolution can drive the refinement of a preexisting enzyme, such as the evolution of efficient replicase activity. Here we present a model for what was, arguably, an even earlier stage of chemical evolution, when polymer sequence diversity was generated and sustained before, and during, the onset of functional selection. The model includes regular environmental cycles (e.g. hydration-dehydration cycles) that drive polymers between times of replication and functional activity, which coincide with times of different monomer and polymer diffusivity. Template-directed replication of informational polymers, which takes place during the dehydration stage of each cycle, is considered to be sequence-independent. New sequences are generated by spontaneous polymer formation, and all sequences compete for a finite monomer resource that is recycled via reversible polymerization. Kinetic Monte Carlo simulations demonstrate that this proposed prebiotic scenario provides a robust mechanism for the exploration of sequence space. Introduction of a polymer sequence with monomer synthetase activity illustrates that functional sequences can become established in a preexisting pool of otherwise non-functional sequences. Functional selection does not dominate system dynamics and sequence diversity remains high, permitting the emergence and spread of more than one functional sequence. It is also observed that polymers spontaneously form clusters in simulations where polymers diffuse more slowly than monomers, a feature that is reminiscent of a previous proposal that the earliest stages of life could have been defined by the collective evolution of a system-wide cooperation of polymer aggregates. 
Overall, the results presented demonstrate the merits of considering plausible prebiotic polymer chemistries and environments that would have allowed for the rapid turnover of monomer resources and for regularly varying monomer/polymer diffusivities. PMID:22493682
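
    The kinetic Monte Carlo machinery underlying such simulations can be illustrated at its simplest with a Gillespie-style model of reversible polymerization drawing on a finite, recycled monomer pool. The rates, pool size, and single-chain simplification are invented; the paper's model tracks many sequences and templated replication.

    ```python
    import math
    import random

    def kmc_polymerization(n_monomers=100, k_on=1.0, k_off=0.5, t_end=50.0, seed=0):
        """Gillespie-style kinetic Monte Carlo of reversible polymerization:
        a chain grows by attaching a free monomer (rate k_on * free) and
        shrinks by releasing one (rate k_off * length), so the finite
        monomer pool is continuously recycled."""
        rng = random.Random(seed)
        free, chain, t = n_monomers, 0, 0.0
        while t < t_end:
            r_grow, r_shrink = k_on * free, k_off * chain
            total = r_grow + r_shrink
            if total == 0:
                break
            t += -math.log(rng.random()) / total      # exponential waiting time
            if rng.random() < r_grow / total:
                free, chain = free - 1, chain + 1
            else:
                free, chain = free + 1, chain - 1
        return free, chain

    free, chain = kmc_polymerization()
    ```

    At steady state the grow and shrink rates balance, so the chain fluctuates around a length set by k_on/k_off while monomers keep cycling, the "rapid turnover of monomer resources" the abstract emphasizes.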

  9. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLoughlin, K.

    2016-01-22

    The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

  10. Effect of sequence-dependent rigidity on plectoneme localization in dsDNA

    NASA Astrophysics Data System (ADS)

    Medalion, Shlomi; Rabin, Yitzhak

    2016-04-01

    We use Monte-Carlo simulations to study the effect of variable rigidity on plectoneme formation and localization in supercoiled double-stranded DNA. We show that the presence of soft sequences increases the number of plectoneme branches and that the edges of the branches tend to be localized at these sequences. We propose an experimental approach to test our results in vitro, and discuss the possible role played by plectoneme localization in the search process of transcription factors for their targets (promoter regions) on the bacterial genome.

  11. Investigating the role of sliding friction in rolling motion: a teaching sequence based on experiments and simulations

    NASA Astrophysics Data System (ADS)

    De Ambrosis, Anna; Malgieri, Massimiliano; Mascheretti, Paolo; Onorato, Pasquale

    2015-05-01

    We designed a teaching-learning sequence on rolling motion, rooted in previous research about student conceptions, and proposing an educational reconstruction strongly centred on the role of friction in different cases of rolling. A series of experiments based on video analysis is used to highlight selected key concepts and to motivate students in their exploration of the topic; and interactive simulations, which can be modified on the fly by students to model different physical situations, are used to stimulate autonomous investigation in enquiry activities. The activity sequence was designed for students on introductory physics courses and was tested with a group of student teachers. Comparisons between pre- and post-tests, and between our results and those reported in the literature, indicate that students’ understanding of rolling motion improved markedly and some typical difficulties were overcome.

  12. A DS-UWB Cognitive Radio System Based on Bridge Function Smart Codes

    NASA Astrophysics Data System (ADS)

    Xu, Yafei; Hong, Sheng; Zhao, Guodong; Zhang, Fengyuan; di, Jinshan; Zhang, Qishan

    This paper proposes a direct-sequence UWB cognitive radio system based on a bridge function smart sequence matrix and the Gaussian pulse. Because the system uses the bridge function smart code as its spreading sequence, the zero correlation zones (ZCZs) of the bridge function sequences' auto-correlation functions can reduce multipath fading of the pulse. The modulated signal was sent over the IEEE 802.15.3a UWB channel. We analyze how the ZCZs suppress multipath interference (MPI), one of the main sources of interference in the system. The simulation in SIMULINK/MATLAB is described in detail. The results show that the system performs better than one employing a Walsh sequence square matrix, and this was verified in principle by the analytical formula.
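
    The ZCZ property matters because multipath echoes arriving at lags inside the zone decorrelate from the direct path. A minimal sketch of the periodic autocorrelation computation, using a small ±1 sequence rather than an actual bridge function code:

    ```python
    def periodic_autocorrelation(seq):
        """Periodic autocorrelation R(tau) of a +/-1 spreading sequence.
        A zero correlation zone (ZCZ) is a run of lags around 0 where R
        vanishes, so multipath echoes delayed by those lags contribute no
        correlator output."""
        n = len(seq)
        return [sum(seq[i] * seq[(i + tau) % n] for i in range(n))
                for tau in range(n)]

    s = [1, 1, 1, -1]            # a perfect sequence: R(tau) = 0 for all tau != 0
    R = periodic_autocorrelation(s)
    ```

    Walsh sequences lack this flat off-peak autocorrelation, which is consistent with the performance gap the simulation reports.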

  13. A Fast-Time Simulation Tool for Analysis of Airport Arrival Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Meyn, Larry A.; Neuman, Frank

    2004-01-01

    The basic objective of arrival sequencing in air traffic control automation is to match traffic demand and airport capacity while minimizing delays. The performance of an automated arrival scheduling system, such as the Traffic Management Advisor developed by NASA for the FAA, can be studied by a fast-time simulation that does not involve running expensive and time-consuming real-time simulations. The fast-time simulation models runway configurations, the characteristics of arrival traffic, deviations from predicted arrival times, as well as the arrival sequencing and scheduling algorithm. This report reviews the development of the fast-time simulation method used originally by NASA in the design of the sequencing and scheduling algorithm for the Traffic Management Advisor. The utility of this method of simulation is demonstrated by examining the effect on delays of altering arrival schedules at a hub airport.

  14. Accuracy of UTE-MRI-based patient setup for brain cancer radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yingli; Cao, Minsong; Kaprealian, Tania

    2016-01-15

    Purpose: Radiation therapy simulations solely based on MRI have advantages compared to CT-based approaches. One feature readily available from computed tomography (CT) that would need to be reproduced with MR is the ability to compute digitally reconstructed radiographs (DRRs) for comparison against on-board radiographs commonly used for patient positioning. In this study, the authors generate MR-based bone images using a single ultrashort echo time (UTE) pulse sequence and quantify their 3D and 2D image registration accuracy to CT and radiographic images for treatments in the cranium. Methods: Seven brain cancer patients were scanned at 1.5 T using a radial UTE sequence. The sequence acquired two images at two different echo times. The two images were processed using an in-house software to generate the UTE bone images. The resultant bone images were rigidly registered to simulation CT data and the registration error was determined using manually annotated landmarks as references. DRRs were created based on UTE-MRI and registered to simulated on-board images (OBIs) and actual clinical 2D oblique images from ExacTrac™. Results: UTE-MRI resulted in well visualized cranial, facial, and vertebral bones that quantitatively matched the bones in the CT images with geometric measurement errors of less than 1 mm. The registration error between DRRs generated from 3D UTE-MRI and the simulated 2D OBIs or the clinical oblique x-ray images was also less than 1 mm for all patients. Conclusions: UTE-MRI-based DRRs appear to be promising for daily patient setup of brain cancer radiotherapy with kV on-board imaging.

  15. Multiple alignment-free sequence comparison

    PubMed Central

    Ren, Jie; Song, Kai; Sun, Fengzhu; Deng, Minghua; Reinert, Gesine

    2013-01-01

    Motivation: Recently, a range of new statistics have become available for the alignment-free comparison of two sequences based on k-tuple word content. Here, we extend these statistics to the simultaneous comparison of more than two sequences. Our suite of statistics contains, first, extensions of statistics for pairwise comparison to the joint k-tuple content of all the sequences and, second, averages of sums of pairwise comparison statistics. The two tasks we consider are, first, to identify sequences that are similar to a set of target sequences, and, second, to measure the similarity within a set of sequences. Results: Our investigation uses both simulated data as well as cis-regulatory module data, where the task is to identify cis-regulatory modules with similar transcription factor binding sites. We find that although all of our statistics show a similar performance on real data, on simulated data the Shepp-type statistics are in some instances outperformed by star-type statistics. The multiple alignment-free statistics are more sensitive to contamination in the data than the pairwise average statistics. Availability: Our implementation of the five statistics is available as an R package named ‘multiAlignFree’ at http://www-rcf.usc.edu/∼fsun/Programs/multiAlignFree/multiAlignFreemain.html. Contact: reinert@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23990418
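
    The pairwise statistics being extended are built on k-tuple (k-mer) word counts. A sketch of the basic D2-type inner-product statistic on toy sequences; the centred and standardized variants studied in the paper are not reproduced here.

    ```python
    from collections import Counter

    def kmer_counts(seq, k):
        """Count all overlapping k-tuples (k-mers) in a sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def d2(seq_a, seq_b, k=3):
        """Basic D2 alignment-free statistic: the inner product of the
        k-tuple count vectors of two sequences. Larger values indicate
        more shared word content."""
        a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
        return sum(a[w] * b[w] for w in a.keys() & b.keys())

    x = "ACGTACGTAC"
    y = "ACGTTTTTAC"   # shares some 3-mers with x
    z = "GGGGGGGGGG"   # shares none
    ```

    The multiple-sequence extensions either pool the k-tuple counts of all sequences jointly or average such pairwise statistics over all pairs.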

  16. Development of a Coarse-grained Model of Polypeptoids for Studying Self-assembly in Solution

    NASA Astrophysics Data System (ADS)

    Du, Pu; Rick, Steven; Kumar, Revati

    Polypeptoids, a class of highly tunable biomimetic analogues of peptides, are used as a prototypical model system to study self-assembly. The focus of this work is to glean insight into the effect of electrostatic and other non-covalent secondary interactions on the self-assembly of sequence-defined polypeptoids, with different charged and uncharged side groups, in solution, complementing experiments. Atomistic (AA) molecular dynamics simulation can provide a complete description of the self-assembly of polypeptoid systems. However, the long simulation length and time scales needed for these processes require the development of a computationally cheaper alternative, namely coarse-grained (CG) models. A CG model for studying polypeptoid micellar interactions is being developed, parameterized on atomistic simulations, using a hybridized approach involving the OPLS-UA force field and the Stillinger-Weber (SW) potential form. The development of the model as well as results from simulations of self-assembly as a function of polypeptoid chemical structure and sequence will be presented.

  17. Detecting Recombination Hotspots from Patterns of Linkage Disequilibrium.

    PubMed

    Wall, Jeffrey D; Stevison, Laurie S

    2016-08-09

    With recent advances in DNA sequencing technologies, it has become increasingly easy to use whole-genome sequencing of unrelated individuals to assay patterns of linkage disequilibrium (LD) across the genome. One type of analysis that is commonly performed is to estimate local recombination rates and identify recombination hotspots from patterns of LD. One method for detecting recombination hotspots, LDhot, has been used in a handful of species to further our understanding of the basic biology of recombination. For the most part, the effectiveness of this method (e.g., power and false positive rate) is unknown. In this study, we run extensive simulations to compare the effectiveness of three different implementations of LDhot. We find large differences in the power and false positive rates of these different approaches, as well as a strong sensitivity to the window size used (with smaller window sizes leading to more accurate estimation of hotspot locations). We also compared our LDhot simulation results with comparable simulation results obtained from a Bayesian maximum-likelihood approach for identifying hotspots. Surprisingly, we found that the latter computationally intensive approach had substantially lower power over the parameter values considered in our simulations. Copyright © 2016 Wall and Stevison.

  18. Fossils out of sequence: Computer simulations and strategies for dealing with stratigraphic disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, A.H.; Flessa, K.W.

    Microstratigraphic resolution is limited by vertical mixing and reworking of fossils. Stratigraphic disorder is the degree to which fossils within a stratigraphic sequence are not in proper chronological order. Stratigraphic disorder arises through in situ vertical mixing of fossils and reworking of older fossils into younger deposits. The authors simulated the effects of mixing and reworking by simple computer models, and measured stratigraphic disorder using rank correlation between age and stratigraphic position (Spearman and Kendall coefficients). Mixing was simulated by randomly transposing pairs of adjacent fossils in a sequence. Reworking was simulated by randomly inserting older fossils into a younger sequence. Mixing is an inefficient means of producing disorder; after 500 mixing steps stratigraphic order is still significant at the 99% to 95% level, depending on the coefficient used. Reworking disorders sequences very efficiently: significant order begins to be lost when reworked shells make up 35% of the sequence. Thus a sequence can be dominated by undisturbed, autochthonous shells and still be disordered. The effects of mixing-produced disorder can be minimized by increasing sample size at each horizon. Increased spacing between samples is of limited utility in dealing with disordered sequences: while widely separated samples are more likely to be stratigraphically ordered, the smaller number of samples makes the detection of trends problematic.
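
    The mixing simulation and its rank-correlation measure are easy to reproduce in miniature. The sequence length and swap count below are illustrative, not the authors' parameters.

    ```python
    import random

    def spearman(xs, ys):
        """Spearman rank correlation (assumes no tied values)."""
        def ranks(v):
            order = sorted(range(len(v)), key=lambda i: v[i])
            r = [0] * len(v)
            for rank, i in enumerate(order):
                r[i] = rank
            return r
        rx, ry = ranks(xs), ranks(ys)
        n = len(xs)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    def mix(ages, n_swaps, seed=0):
        """In situ mixing: randomly transpose pairs of adjacent fossils."""
        rng = random.Random(seed)
        seq = list(ages)
        for _ in range(n_swaps):
            i = rng.randrange(len(seq) - 1)
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
        return seq

    ages = list(range(50))                         # perfectly ordered sequence
    positions = list(range(50))                    # stratigraphic position
    rho_ordered = spearman(ages, positions)        # 1.0 before mixing
    rho_mixed = spearman(mix(ages, 500), positions)
    ```

    Even after hundreds of adjacent transpositions the rank correlation stays well above zero, which mirrors the paper's finding that mixing is an inefficient disorderer.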

  19. The bioinformatics of nucleotide sequence coding for proteins requiring metal coenzymes and proteins embedded with metals

    NASA Astrophysics Data System (ADS)

    Tremberger, G.; Dehipawala, Sunil; Cheung, E.; Holden, T.; Sullivan, R.; Nguyen, A.; Lieberman, D.; Cheung, T.

    2015-09-01

    All metallo-proteins need post-translational metal incorporation. In fact, the isotope ratios of Fe, Cu, and Zn in physiology and oncology have emerged as an important tool. The nickel-containing F430 is the prosthetic group of the enzyme methyl coenzyme M reductase, which catalyzes the release of methane in the final step of methanogenesis, a prime energy-metabolism candidate for life-exploration space missions in the solar system. The 3.5 Gyr early-life sulfite reductase, a life-switch energy metabolism, had Fe-Mo clusters. The nitrogenase for nitrogen fixation 3 billion years ago had Mo. The early-life arsenite oxidase needed for anoxygenic photosynthesis energy metabolism 2.8 billion years ago had Mo and Fe. The selection pressure in metal incorporation inside a protein would be quantifiable in terms of the related nucleotide sequence complexity, with fractal dimension and entropy values. A simulation model showed that the studied metal-requiring energy-metabolism sequences experienced at least ten times more selection pressure than the horizontally transferred sequences in the mealybug, as judged from the histogram of correlation R-squared values. The metal energy-metabolism sequence group was compared to the circadian clock KaiC sequence group using the magnesium atomic-level bond-shifting mechanism in the protein, and the simulation model would suggest a much higher selection pressure for the energy life-switch sequence group. The possibility of using Kepler 444 as an example of ancient life in the Galaxy, with its associated exoplanets, has been proposed and is further discussed in this report. Examples of arsenic metal bonding shifts probed by synchrotron-based X-ray spectroscopy and Zn-controlled FOXP2-regulated pathways in human and chimp brain tissue samples are studied in relation to the sequence bioinformatics. 
The analysis results suggest that a relatively large metal bonding shift is associated with a low-probability correlation R-squared outcome in the bioinformatics simulation.

  20. ATP hydrolysis provides functions that promote rejection of pairings between different copies of long repeated sequences

    PubMed Central

    Danilowicz, Claudia; Hermans, Laura; Coljee, Vincent; Prévost, Chantal

    2017-01-01

    During DNA recombination and repair, RecA family proteins must promote rapid joining of homologous DNA. Repeated sequences with lengths >100 base pairs occupy more than 1% of bacterial genomes; however, commitment to strand exchange was believed to occur after testing only ∼20–30 bp. If that were true, pairings between different copies of long repeated sequences would usually become irreversible. Our experiments reveal that in the presence of ATP hydrolysis even 75 bp sequence-matched strand exchange products remain quite reversible. Experiments also indicate that when ATP hydrolysis is present, flanking heterologous dsDNA regions increase the reversibility of sequence-matched strand exchange products with lengths up to ∼75 bp. Results of molecular dynamics simulations provide insight into how ATP hydrolysis destabilizes strand exchange products. These results inspired a model that shows how pairings between long repeated sequences could be efficiently rejected even though most homologous pairings form irreversible products. PMID:28854739

  1. Open-pNovo: De Novo Peptide Sequencing with Thousands of Protein Modifications.

    PubMed

    Yang, Hao; Chi, Hao; Zhou, Wen-Jing; Zeng, Wen-Feng; He, Kun; Liu, Chao; Sun, Rui-Xiang; He, Si-Min

    2017-02-03

    De novo peptide sequencing has improved remarkably, but sequencing full-length peptides with unexpected modifications is still a challenging problem. Here we present an open de novo sequencing tool, Open-pNovo, for de novo sequencing of peptides with arbitrary types of modifications. Although the search space increases by ∼300 times, Open-pNovo is close to, or even ∼10 times faster than, the other three algorithms considered. Furthermore, considering top-1 candidates on three MS/MS data sets, Open-pNovo can recall over 90% of the results obtained by any one traditional algorithm and report 5-87% more peptides, including 14-250% more modified peptides. On a high-quality simulated data set, ∼85% of peptides with arbitrary modifications can be recalled by Open-pNovo, while hardly any results can be recalled by the others. In summary, Open-pNovo is an excellent tool for open de novo sequencing and has great potential for discovering unexpected modifications in real biological applications.

  2. Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data

    PubMed Central

    2014-01-01

    Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition of a correctly mapped read that takes into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data, for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition of mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. 
The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
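    The robustness criterion described above (agreement of start and end positions plus indel and substitution counts) can be sketched as a simple predicate. The class, field names, and tolerance parameter below are illustrative assumptions, not CuReSimEval's actual implementation.

```python
# Sketch of a "correctly mapped read" check in the spirit of the benchmark:
# a mapping is accepted only if both its start and end positions fall within
# a tolerance of the simulated truth and its edit operations match.
from dataclasses import dataclass

@dataclass
class Alignment:
    start: int          # leftmost mapped position on the reference
    end: int            # rightmost mapped position on the reference
    indels: int         # number of insertions/deletions in the alignment
    substitutions: int  # number of mismatches in the alignment

def correctly_mapped(truth: Alignment, mapped: Alignment, tol: int = 0) -> bool:
    """Return True if `mapped` agrees with the simulated ground truth."""
    return (abs(mapped.start - truth.start) <= tol
            and abs(mapped.end - truth.end) <= tol
            and mapped.indels == truth.indels
            and mapped.substitutions == truth.substitutions)

truth = Alignment(start=100, end=199, indels=1, substitutions=2)
print(correctly_mapped(truth, Alignment(100, 199, 1, 2)))  # True
print(correctly_mapped(truth, Alignment(100, 210, 1, 2)))  # False: wrong end
```

Checking the end position as well as the start catches mappers that soft-clip their way to a plausible start coordinate while misaligning the rest of the read.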

  3. Computational Simulation of Composite Structural Fatigue

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos C. (Technical Monitor)

    2005-01-01

    Progressive damage and fracture of composite structures subjected to monotonically increasing static, tension-tension cyclic, pressurization, and flexural cyclic loading are evaluated via computational simulation. Constituent material properties, stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for composites. Damage initiation, growth, accumulation, and propagation to fracture due to monotonically increasing static and cyclic loads are included in the simulations. Results show the number of cycles to failure at different temperatures and the damage progression sequence during different degradation stages. A procedure is outlined for use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of results with insight for design decisions.

  4. Computational Simulation of Composite Structural Fatigue

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    2004-01-01

    Progressive damage and fracture of composite structures subjected to monotonically increasing static, tension-tension cyclic, pressurization, and flexural cyclic loading are evaluated via computational simulation. Constituent material properties, stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for composites. Damage initiation, growth, accumulation, and propagation to fracture due to monotonically increasing static and cyclic loads are included in the simulations. Results show the number of cycles to failure at different temperatures and the damage progression sequence during different degradation stages. A procedure is outlined for use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of results with insight for design decisions.

  5. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  6. A generalized theoretical framework for the description of spin decoupling in solid-state MAS NMR: Offset effect on decoupling performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Kong Ooi; Meier, Beat H., E-mail: beme@ethz.ch, E-mail: maer@ethz.ch; Ernst, Matthias, E-mail: beme@ethz.ch, E-mail: maer@ethz.ch

    2016-09-07

    We present a generalized theoretical framework that allows the approximate but rapid analysis of residual couplings of arbitrary decoupling sequences in solid-state NMR under magic-angle spinning conditions. It is a generalization of the tri-modal Floquet analysis of TPPM decoupling [Scholz et al., J. Chem. Phys. 130, 114510 (2009)] where three characteristic frequencies are used to describe the pulse sequence. Such an approach can be used to describe arbitrary periodic decoupling sequences that differ only in the magnitude of the Fourier coefficients of the interaction-frame transformation. It allows a ∼100 times faster calculation of second-order residual couplings as a function of pulse sequence parameters than full spin-dynamics simulations. By comparing the theoretical calculations with full numerical simulations, we show the potential of the new approach to examine the performance of decoupling sequences. We exemplify the usefulness of this framework by analyzing the performance of commonly used high-power decoupling sequences and low-power decoupling sequences such as amplitude-modulated XiX (AM-XiX) and its super-cycled variant SC-AM-XiX. In addition, the effect of chemical-shift offset is examined for both high- and low-power decoupling sequences. The results show that the cross-terms between the dipolar couplings are the main contributions to the line broadening when offset is present. We also show that the SC-AM-XiX shows a better offset compensation.

  7. A generalized theoretical framework for the description of spin decoupling in solid-state MAS NMR: Offset effect on decoupling performance.

    PubMed

    Tan, Kong Ooi; Agarwal, Vipin; Meier, Beat H; Ernst, Matthias

    2016-09-07

    We present a generalized theoretical framework that allows the approximate but rapid analysis of residual couplings of arbitrary decoupling sequences in solid-state NMR under magic-angle spinning conditions. It is a generalization of the tri-modal Floquet analysis of TPPM decoupling [Scholz et al., J. Chem. Phys. 130, 114510 (2009)] where three characteristic frequencies are used to describe the pulse sequence. Such an approach can be used to describe arbitrary periodic decoupling sequences that differ only in the magnitude of the Fourier coefficients of the interaction-frame transformation. It allows a ∼100 times faster calculation of second-order residual couplings as a function of pulse sequence parameters than full spin-dynamics simulations. By comparing the theoretical calculations with full numerical simulations, we show the potential of the new approach to examine the performance of decoupling sequences. We exemplify the usefulness of this framework by analyzing the performance of commonly used high-power decoupling sequences and low-power decoupling sequences such as amplitude-modulated XiX (AM-XiX) and its super-cycled variant SC-AM-XiX. In addition, the effect of chemical-shift offset is examined for both high- and low-power decoupling sequences. The results show that the cross-terms between the dipolar couplings are the main contributions to the line broadening when offset is present. We also show that the SC-AM-XiX shows a better offset compensation.

  8. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked-response paradigm can lead to overlap of multiple responses and consequently to difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions; it controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
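    The core idea, recovering one response template from overlapped recordings by least squares, with the condition number of the stimulus matrix bounding noise amplification, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation; the onset times and response shape are invented for the example.

```python
# Least-squares deconvolution of overlapping evoked responses (sketch).
# Each stimulus onset adds a shifted copy of the unknown response r (length m)
# into the recording y; stacking the shifts gives a tall matrix A, and r is
# recovered as the least-squares solution of A r = y. cond(A) quantifies, a
# priori, how strongly measurement noise is amplified in the recovery.
import numpy as np

def convolution_matrix(onsets, m, n):
    """A[t + k, k] accumulates response sample k after each onset time t."""
    A = np.zeros((n, m))
    for t in onsets:
        for k in range(m):
            if t + k < n:
                A[t + k, k] += 1.0
    return A

m, n = 20, 200
onsets = [0, 7, 30, 41, 80, 95, 130, 151]    # jittered stimulus sequence
r_true = np.sin(np.linspace(0, np.pi, m))    # unknown evoked response

A = convolution_matrix(onsets, m, n)
y = A @ r_true                               # overlapped recording (noiseless)
r_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.linalg.cond(A))                     # sequence-quality metric
print(np.allclose(r_hat, r_true))            # perfect recovery without noise
```

A periodic onset sequence (no jitter) would make columns of A nearly dependent and the condition number blow up, which is why the abstract stresses a minimum amount of jitter.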

  9. Role of Sequence and Structural Polymorphism on the Mechanical Properties of Amyloid Fibrils

    PubMed Central

    Kim, Jae In; Na, Sungsoo; Eom, Kilho

    2014-01-01

    Amyloid fibrils, which play a critical role in disease expression, have recently been found to exhibit excellent mechanical properties, such as an elastic modulus on the order of 10 GPa, comparable to that of other mechanical proteins such as microtubules, actin filaments, and spider silk. These remarkable mechanical properties of amyloid fibrils are correlated with their functional role in disease expression. This suggests the importance of understanding how these excellent mechanical properties originate through a self-assembly process that may depend on the amino acid sequence. However, the sequence-structure-property relationship of amyloid fibrils is not yet fully understood. In this work, we characterize the mechanical properties of human islet amyloid polypeptide (hIAPP) fibrils with respect to their molecular structures as well as their amino acid sequence by using all-atom explicit-water molecular dynamics (MD) simulation. The simulation results suggest that the remarkable bending rigidity of amyloid fibrils can be achieved through a specific self-aggregation pattern such as antiparallel stacking of β strands (peptide chains). Moreover, we show that a single point mutation of the hIAPP chain constituting an hIAPP fibril significantly affects the thermodynamic stability of hIAPP fibrils formed by parallel stacking of peptide chains, and results in a significant change in the bending rigidity of hIAPP fibrils formed by antiparallel stacking of β strands. This clearly elucidates the role of the amino acid sequence in determining not only the equilibrium conformations of amyloid fibrils but also their mechanical properties. Our study sheds light on the sequence-structure-property relationships of amyloid fibrils and suggests that their mechanical properties are encoded in their sequence-dependent molecular architecture. PMID:24551113

  10. GASP: Gapped Ancestral Sequence Prediction for proteins

    PubMed Central

    Edwards, Richard J; Shields, Denis C

    2004-01-01

    Background The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. Results Here we present a new algorithm, GASP (Gapped Ancestral Sequence Prediction), for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence Prediction) will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. PMID:15350199
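    The two-pass traversal described above (tips-to-root using descendant data, then root-to-tips using outgroup information) can be illustrated with a toy example. The sketch below is a Fitch-parsimony analogue for a single alignment column, with gaps treated as an ordinary state; GASP itself uses a likelihood approach based on substitution matrices, which this deliberately simplifies.

```python
# Toy two-pass ancestral state reconstruction for one alignment column:
# a leaves-to-root pass collects candidate state sets (Fitch parsimony),
# and a root-to-tips pass resolves each node, preferring the parent's state.

class Node:
    def __init__(self, children=None, state=None):
        self.children = children or []
        self.states = {state} if state else None  # candidate state set

def down_pass(node):
    """Leaves-to-root: intersect children's sets if possible, else union."""
    if not node.children:
        return node.states
    sets = [down_pass(c) for c in node.children]
    inter = set.intersection(*sets)
    node.states = inter if inter else set.union(*sets)
    return node.states

def up_pass(node, parent_state=None):
    """Root-to-tips: keep the parent's state when allowed, else pick one."""
    chosen = parent_state if parent_state in node.states else min(node.states)
    node.assigned = chosen
    for c in node.children:
        up_pass(c, chosen)

# Tree ((A,A),(A,-)): three leaves share 'A', one leaf has a gap '-'.
tree = Node([Node([Node(state='A'), Node(state='A')]),
             Node([Node(state='A'), Node(state='-')])])
down_pass(tree)
up_pass(tree)
print(tree.assigned)  # 'A': the most parsimonious root state here
```

Replacing the set intersection/union with per-state probabilities from a substitution matrix turns this parsimony skeleton into the likelihood-based scheme the abstract describes.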

  11. SeqSIMLA2_exact: simulate multiple disease sites in large pedigrees with given disease status for diseases with low prevalence.

    PubMed

    Yao, Po-Ju; Chung, Ren-Hua

    2016-02-15

    It is difficult for current simulation tools to simulate sequence data with a pre-specified pedigree structure and pre-specified affection status. Previously, we developed a flexible tool, SeqSIMLA2, for simulating sequence data in either unrelated case-control or family samples with different disease and quantitative-trait models. Here we extended the tool to efficiently simulate sequences with multiple disease sites in large pedigrees with a given disease status for each pedigree member, assuming that the disease prevalence is low. SeqSIMLA2_exact is implemented in C++ and is available at http://seqsimla.sourceforge.net. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Deconvoluting simulated metagenomes: the performance of hard- and soft- clustering algorithms applied to metagenomic chromosome conformation capture (3C)

    PubMed Central

    DeMaere, Matthew Z.

    2016-01-01

    Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (two hard, three soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios; however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713

  13. Principles of protein folding--a perspective from simple exact models.

    PubMed Central

    Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.

    1995-01-01

    General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
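    The "simple exact lattice models" discussed above can be made concrete with a few lines of code. The sketch below scores a chain of H (hydrophobic) and P (polar) monomers on a 2D square lattice by counting non-bonded H-H contacts, the standard HP-model energy; the example sequence and fold are invented for illustration.

```python
# Minimal HP lattice model energy function: each pair of hydrophobic (H)
# monomers that are lattice neighbours but not chain neighbours contributes
# a favourable energy of -1. Enumerating and scoring every conformation of a
# short chain with such a function is what makes small lattice models "exact".

def hp_energy(sequence, coords):
    """Energy = -(number of H-H contacts between non-adjacent chain positions)."""
    contacts = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip bonded chain neighbours
            if sequence[i] == 'H' and sequence[j] == 'H':
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice neighbours
                    contacts += 1
    return -contacts

# A 4-mer 'HHHH' folded into a square: the two chain ends come into contact.
seq = "HHHH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(seq, fold))  # -1
```

The binary, delocalized "folding code" the abstract describes corresponds to the fact that this energy depends only on the H/P pattern, not on detailed side-chain chemistry.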

  14. RAPTR-SV: a hybrid method for the detection of structural variants

    USDA-ARS?s Scientific Manuscript database

    Motivation: Identification of Structural Variants (SV) in sequence data results in a large number of false positive calls using existing software, which overburdens subsequent validation. Results: Simulations using RAPTR-SV and another software package that uses a similar algorithm for SV detection...

  15. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  16. Dynamic Modeling of Starting Aerodynamics and Stage Matching in an Axi-Centrifugal Compressor

    NASA Technical Reports Server (NTRS)

    Wilkes, Kevin; OBrien, Walter F.; Owen, A. Karl

    1996-01-01

    A DYNamic Turbine Engine Compressor Code (DYNTECC) has been modified to model speed transients from 0-100% of compressor design speed. The impetus for this enhancement was to investigate stage matching and stalling behavior during a start sequence as compared to rotating stall events above ground idle. The model can simulate speed and throttle excursions simultaneously as well as time varying bleed flow schedules. Results of a start simulation are presented and compared to experimental data obtained from an axi-centrifugal turboshaft engine and companion compressor rig. Stage by stage comparisons reveal the front stages to be operating in or near rotating stall through most of the start sequence. The model matches the starting operating line quite well in the forward stages with deviations appearing in the rearward stages near the start bleed. Overall, the performance of the model is very promising and adds significantly to the dynamic simulation capabilities of DYNTECC.

  17. Ordered shotgun sequencing of a 135 kb Xq25 YAC containing ANT2 and four possible genes, including three confirmed by EST matches.

    PubMed Central

    Chen, C N; Su, Y; Baybayan, P; Siruno, A; Nagaraja, R; Mazzarella, R; Schlessinger, D; Chen, E

    1996-01-01

    Ordered shotgun sequencing (OSS) has been successfully carried out with an Xq25 YAC substrate. yWXD703 DNA was subcloned into lambda phage and sequences of insert ends of the lambda subclones were used to generate a map to select a minimum tiling path of clones to be completely sequenced. The sequence of 135 038 nt contains the entire ANT2 cDNA as well as four other candidates suggested by computer-assisted analyses. One of the putative genes is homologous to a gene implicated in Graves' disease and it, ANT2 and two others are confirmed by EST matches. The results suggest that OSS can be applied to YACs in accord with earlier simulations and further indicate that the sequence of the YAC accurately reflects the sequence of uncloned human DNA. PMID:8918809

  18. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Chaos game representation walk model for the protein sequences

    NASA Astrophysics Data System (ADS)

    Gao, Jie; Jiang, Li-Li; Xu, Zhen-Yuan

    2009-10-01

    A new chaos game representation (CGR) of protein sequences based on the detailed hydrophobic-hydrophilic (HP) model was proposed by Yu et al (Physica A 337 (2004) 171). In the present paper, a CGR-walk model is proposed based on the new CGR coordinates for protein sequences from complete genomes. The new CGR coordinates based on the detailed HP model are converted into a time series, and a long-memory ARFIMA(p, d, q) model is introduced into protein sequence analysis. This model is applied to simulating real CGR-walk sequence data for twelve protein sequences. Remarkably, long-range correlations are uncovered in the data, and the results obtained from these models are reasonably consistent with those available from the ARFIMA(p, d, q) model.
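    The CGR-walk construction, mapping each residue to a corner of the unit square and moving the walker halfway toward it, can be sketched generically. The four-way grouping of amino acids below is an illustrative assumption, not the detailed HP model of Yu et al.

```python
# Generic chaos-game-representation (CGR) walk for a protein sequence: each
# residue class is assigned a corner of the unit square and the walker moves
# to the midpoint between its current position and that corner. One coordinate
# of the trajectory then serves as a time series for long-memory analysis
# (e.g. ARFIMA fitting, as in the abstract).

CORNERS = {  # coarse residue classes -> square corners (assumed grouping)
    'nonpolar': (0.0, 0.0), 'polar': (1.0, 0.0),
    'acidic':   (0.0, 1.0), 'basic': (1.0, 1.0),
}
CLASS = {aa: 'nonpolar' for aa in 'AVLIPFMWGC'}
CLASS.update({aa: 'polar' for aa in 'STYNQ'})
CLASS.update({aa: 'acidic' for aa in 'DE'})
CLASS.update({aa: 'basic' for aa in 'KRH'})

def cgr_walk(protein):
    x, y = 0.5, 0.5                          # start at the centre of the square
    walk = []
    for aa in protein:
        cx, cy = CORNERS[CLASS[aa]]
        x, y = (x + cx) / 2, (y + cy) / 2    # midpoint step of the chaos game
        walk.append((x, y))
    return walk

walk = cgr_walk("MKVLDE")
xs = [p[0] for p in walk]   # x-coordinates form the CGR-walk time series
print(xs)
```

Because each step halves the distance to a corner, subsequences with similar residue-class patterns land in nearby regions of the square, which is what gives CGR trajectories their sequence-dependent correlation structure.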

  19. Sequence-dependent nucleosome sliding in rotation-coupled and uncoupled modes revealed by molecular simulations

    PubMed Central

    Tan, Cheng; Takada, Shoji

    2017-01-01

    While nucleosome positioning on the eukaryotic genome plays important roles in genetic regulation, the molecular mechanisms of nucleosome positioning and sliding along DNA are not well understood. Here we investigated thermally activated spontaneous nucleosome sliding mechanisms by developing and applying a coarse-grained molecular simulation method that incorporates both long-range electrostatic and short-range hydrogen-bond interactions between the histone octamer and DNA. The simulations revealed two distinct sliding modes depending on the nucleosomal DNA sequence. A uniform DNA sequence showed frequent sliding in one-base-pair steps in a rotation-coupled manner, akin to screw-like motion. In contrast, a strong positioning sequence, the so-called 601 sequence, exhibited rare, abrupt transitions of five and ten base pair steps without rotation. Moreover, we evaluated the importance of hydrogen-bond interactions for the sliding mode, finding that strong and weak bonds favor the rotation-coupled and rotation-uncoupled sliding movements, respectively. PMID:29194442

  20. Real time simulation of computer-assisted sequencing of terminal area operations

    NASA Technical Reports Server (NTRS)

    Dear, R. G.

    1981-01-01

    A simulation was developed to investigate the use of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting, which accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served ordering with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
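    The Constrained Position Shifting idea, searching only landing orders in which no aircraft moves more than a fixed number of positions from its first-come-first-served slot, can be sketched with a brute-force search over a small arrival stream. The separation-time matrix below is an illustrative assumption, not the paper's data.

```python
# Hedged sketch of Constrained Position Shifting (CPS): among all landing
# orders where each aircraft stays within `max_shift` positions of its
# first-come-first-served (FCFS) slot, pick the one minimising total landing
# time under pairwise weight-class separation requirements (values invented).
from itertools import permutations

SEP = {('heavy', 'light'): 3.0, ('heavy', 'heavy'): 2.0,
       ('light', 'light'): 1.0, ('light', 'heavy'): 1.5}

def makespan(order):
    """Total landing time for a sequence of weight classes."""
    return sum(SEP[(a, b)] for a, b in zip(order, order[1:]))

def cps_best(fcfs, max_shift):
    best = None
    for perm in permutations(range(len(fcfs))):
        # aircraft i (FCFS position i) must land within max_shift of slot i
        if all(abs(slot - i) <= max_shift for slot, i in enumerate(perm)):
            order = [fcfs[i] for i in perm]
            if best is None or makespan(order) < makespan(best):
                best = order
    return best

fcfs = ['heavy', 'light', 'heavy', 'light']
print(makespan(fcfs))        # FCFS cost: 3.0 + 1.5 + 3.0 = 7.5
print(cps_best(fcfs, 1))     # reordering within the shift limit cuts the cost
```

A real implementation would use dynamic programming rather than enumerating permutations, but the constraint, bounded deviation from FCFS to preserve fairness while recovering most of the scheduling benefit, is the same.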

  1. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.

    PubMed

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R

    2009-07-01

    The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/

  2. Improving the realism of white matter numerical phantoms: a step towards a better understanding of the influence of structural disorders in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril

    2018-02-01

    White matter is composed of irregularly packed axons, leading to structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo (OGSE) sequences have shown that the diffusivity transverse to axons in this extra-axonal space depends on the frequency of the employed sequence. In this study, we observe the same frequency dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters and enables control of the level of disorder of the generated phantoms. The influence of various geometrical features present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the frequency dependence of the extra-cellular perpendicular diffusivity was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion, and especially beading, have non-negligible effects on this extracellular diffusion metric, which may be measured using standard OGSE DW-MRI clinical protocols.

  3. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the room interior without placing real furniture and articles, viewing it from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame of the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metrically measured feature points, for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).

  4. High performance MRI simulations of motion on multi-GPU systems.

    PubMed

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to avoid spurious echo formation while keeping execution times manageable. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were obtained through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. 
The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms, which examine motion related MR applications.

  5. Optimizing taxonomic classification of marker-gene amplicon sequences with QIIME 2's q2-feature-classifier plugin.

    PubMed

    Bokulich, Nicholas A; Kaehler, Benjamin D; Rideout, Jai Ram; Dillon, Matthew; Bolyen, Evan; Knight, Rob; Huttley, Gavin A; Gregory Caporaso, J

    2018-05-17

    Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier ( https://github.com/qiime2/q2-feature-classifier ), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker-gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit ( https://github.com/caporaso-lab/tax-credit-data ). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
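    The naive Bayes approach can be sketched as a k-mer-count classifier built on scikit-learn. The toy reference sequences, taxon labels, and k=4 below are invented for illustration and are far simpler than the plugin's actual feature pipeline.

```python
from itertools import product
from sklearn.naive_bayes import MultinomialNB

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def kmer_counts(seq):
    """k-mer count vector: the feature representation commonly fed to a
    naive Bayes marker-gene classifier (simplified relative to the plugin)."""
    v = [0] * len(KMERS)
    for i in range(len(seq) - K + 1):
        v[INDEX[seq[i:i + K]]] += 1
    return v

# Toy reference sequences and taxon labels (invented for illustration).
refs = ["ACGT" * 40, "AACCGGTT" * 20, "GGGGCCCCAAAA" * 12, "GCGCGCAT" * 18]
taxa = ["TaxonA", "TaxonB", "TaxonC", "TaxonB"]
clf = MultinomialNB(alpha=0.5).fit([kmer_counts(s) for s in refs], taxa)
query = "AACCGGTT" * 10
print(clf.predict([kmer_counts(query)])[0])
```

    As the abstract notes, parameters such as k-mer length and the smoothing term (`alpha` here) materially affect classifier accuracy and need tuning.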

  6. Experiment evaluation of speckle suppression efficiency of 2D quasi-spiral M-sequence-based diffractive optical element.

    PubMed

    Lapchuk, A; Pashkevich, G A; Prygun, O V; Yurlov, V; Borodin, Y; Kryuchyn, A; Korchovyi, A A; Shylo, S

    2015-10-01

    A quasi-spiral 2D diffractive optical element (DOE) based on an M-sequence of length N=15 is designed and manufactured. The speckle suppression efficiency achieved by DOE rotation is measured. Speckle suppression coefficients of 10.5, 6, and 4 are obtained for green, violet, and red laser beams, respectively. The results of numerical simulation and experimental data show that the quasi-spiral binary DOE structure can be as effective in speckle reduction as a periodic 2D DOE structure. The numerical simulations and experimental results also show that the speckle suppression efficiency of the 2D DOE structure decreases by approximately a factor of two at the boundaries of the visible range. It is shown that replacing this structure with a bilateral 1D DOE allows the maximum speckle suppression efficiency to be obtained over the entire visible range of light.
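    The suppression coefficient reported above is the ratio of speckle contrast (C = std/mean of intensity) before and after decorrelation. The sketch below assumes fully developed speckle with exponential intensity statistics and models the DOE rotation as averaging N=15 independent patterns, which gives the textbook sqrt(N) reduction; it is an idealization, not the paper's measurement.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std/mean of the intensity; the suppression
    coefficient is the ratio C_before / C_after."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(3)
# Fully developed speckle has exponential intensity statistics (C ~ 1).
raw = rng.exponential(1.0, size=100_000)
# Averaging N = 15 independent patterns (one per M-sequence shift) lowers C.
averaged = rng.exponential(1.0, size=(15, 100_000)).mean(axis=0)
k = speckle_contrast(raw) / speckle_contrast(averaged)
print(f"suppression coefficient: {k:.2f}")
```

    The result is close to sqrt(15), about 3.9; the higher measured value of 10.5 for green light suggests the 2D structure decorrelates more than N patterns, which this one-dimensional idealization does not capture.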

  7. Representation of DNA sequences in genetic codon context with applications in exon and intron prediction.

    PubMed

    Yin, Changchuan

    2015-04-01

    To apply digital signal processing (DSP) methods to analyze DNA sequences, the sequences must first be mapped into numerical sequences. Thus, effective numerical mappings of DNA sequences play a key role in the effectiveness of DSP-based methods such as exon prediction. Despite numerous mappings of symbolic DNA sequences to numerical series, existing mapping methods do not include the genetic coding features of DNA sequences. We present a novel numerical representation of DNA sequences using genetic codon context (GCC), in which the numerical values are optimized by simulated annealing to maximize the 3-periodicity signal-to-noise ratio (SNR). The optimized GCC representation is then applied to exon and intron prediction using a Short-Time Fourier Transform (STFT) approach. The results show that the GCC method enhances the SNR values of exon sequences and thus increases the accuracy of predicting protein coding regions in genomes compared with the commonly used 4D binary representation. In addition, this study offers a novel way to reveal specific features of DNA sequences by optimizing numerical mappings of symbolic DNA sequences.
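    The 3-periodicity SNR that the GCC mapping maximizes can be computed from the DFT power spectrum: the power at the N/3 frequency bin divided by the mean spectral power. The fixed A/C/G/T values below are a placeholder for the optimized codon-context values, and the sequences are toy examples.

```python
import random
import numpy as np

def snr_period3(signal):
    """Power at the period-3 frequency divided by the mean spectral power,
    the usual 3-periodicity SNR used for coding-region detection."""
    n = len(signal) - len(signal) % 3      # make N/3 an exact DFT bin
    spectrum = np.abs(np.fft.fft(signal[:n])) ** 2
    return spectrum[n // 3] / spectrum[1:].mean()

# Fixed numerical mapping for illustration; the paper's GCC method instead
# optimizes codon-context values by simulated annealing to maximize this SNR.
MAP = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}
exon_like = "ATGGCC" * 50                  # strong codon-like repetition
rng = random.Random(0)
intron_like = "".join(rng.choice("ACGT") for _ in range(300))
snr_exon = snr_period3([MAP[b] for b in exon_like])
snr_intron = snr_period3([MAP[b] for b in intron_like])
print(snr_exon > snr_intron)
```

    Sliding this measure along the genome with an STFT window is what turns it into an exon predictor.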

  8. Sequence memory based on coherent spin-interaction neural networks.

    PubMed

    Xia, Min; Wong, W K; Wang, Zhijie

    2014-12-01

    Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in existing sequence memory models using heteroassociations, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period, based on coherent spin interaction, is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing a new coherent spin-interaction sequence memory model, the steady-state period can be controlled by the dimension parameters and the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with existing sequence memory models. Furthermore, the sequence storage capacity has an exponential relationship to the dimension of the neural network.

  9. Misconceptions on Missing Data in RAD-seq Phylogenetics with a Deep-scale Example from Flowering Plants.

    PubMed

    Eaton, Deren A R; Spriggs, Elizabeth L; Park, Brian; Donoghue, Michael J

    2017-05-01

    Restriction-site associated DNA (RAD) sequencing and related methods rely on the conservation of enzyme recognition sites to isolate homologous DNA fragments for sequencing, with the consequence that mutations disrupting these sites lead to missing information. There is thus a clear expectation for how missing data should be distributed, with fewer loci recovered between more distantly related samples. This observation has led to a related expectation: that RAD-seq data are insufficiently informative for resolving deeper scale phylogenetic relationships. Here we investigate the relationship between missing information among samples at the tips of a tree and information at edges within it. We re-analyze and review the distribution of missing data across ten RAD-seq data sets and carry out simulations to determine expected patterns of missing information. We also present new empirical results for the angiosperm clade Viburnum (Adoxaceae, with a crown age >50 Ma) for which we examine phylogenetic information at different depths in the tree and with varied sequencing effort. The total number of loci, the proportion that are shared, and phylogenetic informativeness varied dramatically across the examined RAD-seq data sets. Insufficient or uneven sequencing coverage accounted for similar proportions of missing data as dropout from mutation-disruption. Simulations reveal that mutation-disruption, which results in phylogenetically distributed missing data, can be distinguished from the more stochastic patterns of missing data caused by low sequencing coverage. In Viburnum, doubling sequencing coverage nearly doubled the number of parsimony informative sites, and increased by >10X the number of loci with data shared across >40 taxa. Our analysis leads to a set of practical recommendations for maximizing phylogenetic information in RAD-seq studies. 
[hierarchical redundancy; phylogenetic informativeness; quartet informativeness; Restriction-site associated DNA (RAD) sequencing; sequencing coverage; Viburnum.]
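    The expected pattern of mutation-disruption dropout can be illustrated with a simple independent-sites model of the probability that a locus is recovered in both members of a pair of samples. The function, its parameters, and the assumption that any substitution in a recognition site causes dropout are illustrative, not the simulation framework of the paper.

```python
def shared_locus_fraction(divergence, site_len=6, n_sites=2):
    """Expected fraction of RAD loci recovered in both of two samples.

    Illustrative independent-sites model: `divergence` is the pairwise
    per-base substitution probability, and each locus requires `n_sites`
    intact recognition sites of `site_len` bases in both samples.
    """
    return (1.0 - divergence) ** (site_len * n_sites)

for d in (0.0, 0.01, 0.05, 0.10):
    print(f"divergence {d:.2f}: {shared_locus_fraction(d):.2f} of loci shared")
```

    Under this model the shared fraction decays smoothly with phylogenetic distance, which is the structured (non-stochastic) missing-data signature the abstract contrasts with coverage-driven dropout.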

  10. Quick, sensitive and specific detection and evaluation of quantification of minor variants by high-throughput sequencing.

    PubMed

    Leung, Ross Ka-Kit; Dong, Zhi Qiang; Sa, Fei; Chong, Cheong Meng; Lei, Si Wan; Tsui, Stephen Kwok-Wing; Lee, Simon Ming-Yuen

    2014-02-01

    Minor variants have significant implications for quasispecies evolution, early cancer detection and non-invasive fetal genotyping, but their accurate detection by next-generation sequencing (NGS) is hampered by sequencing errors. We generated sequencing data from mixtures at predetermined ratios to provide insight into sequencing errors and variations that cannot be captured by simulation. The data also enable better parameterization of depth of coverage, read quality and heterogeneity, library preparation techniques, and technical repeatability for mathematical modeling, theory development and simulation experiment design. We devised minor variant authentication rules that achieved 100% accuracy in both testing and validation experiments. The rules require no tedious inspection of alignment accuracy, sequencing read quality or errors introduced by homopolymers. The authentication process only requires minor variants to: (1) have a minimum depth of coverage larger than 30; (2) be reported by (a) four or more variant callers, or (b) DiBayes or LoFreq, plus SNVer (or BWA when no results are returned by SNVer), with an interassay coefficient of variation (CV) no larger than 0.1. Quantification accuracy undermined by sequencing errors could be overcome neither by ultra-deep sequencing nor by recruiting more variant callers to reach a consensus, such that consistent underestimation and overestimation (i.e. low CV) were observed. To accommodate stochastic error and adjust the observed ratio within a specified accuracy, we present a proof of concept for the use of a double calibration curve for quantification, which provides an important reference towards potential industrial-scale fabrication of calibrants for NGS.
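    The authentication rules quoted above translate directly into a predicate. Caller names follow the abstract; treating the CV clause as attaching to rule (b) only, and treating BWA as a simple alternative to SNVer, are one reading of the abstract's wording.

```python
def authenticate(depth, callers, cv):
    """Minor-variant authentication rule from the abstract, as a predicate.

    `callers` is the set of tools reporting the variant (names as in the
    abstract: DiBayes, LoFreq, SNVer, BWA). The CV threshold is applied to
    rule (b) only, one reading of the abstract's phrasing; BWA is treated
    as a drop-in alternative to SNVer.
    """
    if depth <= 30:                      # rule (1): minimum depth of coverage
        return False
    rule_a = len(callers) >= 4           # rule (2a): caller consensus
    rule_b = (("DiBayes" in callers or "LoFreq" in callers)
              and ("SNVer" in callers or "BWA" in callers)
              and cv <= 0.1)             # rule (2b): named callers + low CV
    return rule_a or rule_b

print(authenticate(120, {"LoFreq", "SNVer"}, cv=0.05))                    # rule (b) fires
print(authenticate(25, {"DiBayes", "LoFreq", "SNVer", "BWA"}, cv=0.02))   # fails: depth too low
```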

  11. Accuracy and Reproducibility of Adipose Tissue Measurements in Young Infants by Whole Body Magnetic Resonance Imaging

    PubMed Central

    Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans

    2015-01-01

    Purpose: MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR measurements of adipose tissue in young infants in vitro and in vivo. Material and Methods: MR images of ten phantoms simulating the subcutaneous fat of an infant’s torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board, and all parents gave written informed consent. To assess reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 for the PROPELLER sequence. Results: In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. Conclusion: With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy. PMID:25706876
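    A two-class k-means segmentation of voxel intensities, as used for the fat quantification step, can be sketched as follows. The intensity distributions, cluster separation, and the assumption that fat is the brighter class on water-suppressed TSE are synthetic illustrations, not the study's data.

```python
import numpy as np

def kmeans_fat_fraction(intensities, iters=20):
    """Two-class k-means on voxel intensities (fat taken as the brighter
    class on a water-suppressed sequence), returning the fat voxel
    fraction. A minimal stand-in for the segmentation in the abstract."""
    x = np.asarray(intensities, dtype=float)
    centers = np.array([x.min(), x.max()])       # initialize at extremes
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return (labels == centers.argmax()).mean()

rng = np.random.default_rng(1)
tissue = rng.normal(100, 15, size=8000)   # synthetic non-fat voxels
fat = rng.normal(400, 30, size=2000)      # synthetic fat voxels (20% of volume)
frac = kmeans_fat_fraction(np.concatenate([tissue, fat]))
print(f"estimated fat fraction: {frac:.2f}")
```

    Multiplying the fat voxel fraction by the scanned volume gives the fat volume; the wide in vitro accuracy range above shows how sensitive this step is to the chosen segmentation.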

  12. Solid-state NMR adiabatic TOBSY sequences provide enhanced sensitivity for multidimensional high-resolution magic-angle-spinning 1H MR spectroscopy

    NASA Astrophysics Data System (ADS)

    Andronesi, Ovidiu C.; Mintzopoulos, Dionyssios; Struppe, Jochem; Black, Peter M.; Tzika, A. Aria

    2008-08-01

    We propose a solid-state NMR method that maximizes the advantages of high-resolution magic-angle-spinning (HRMAS) applied to intact biopsies when compared to more conventional liquid-state NMR approaches. Theoretical treatment, numerical simulations and experimental results on intact human brain biopsies are presented. Experimentally, it is proven that an optimized adiabatic TOBSY (TOtal through Bond correlation SpectroscopY) solid-state NMR pulse sequence for two-dimensional 1H-1H homonuclear scalar-coupling longitudinal isotropic mixing provides a 20%-50% improvement in signal-to-noise ratio relative to its liquid-state analogue TOCSY (TOtal Correlation SpectroscopY). For this purpose we have refined the C9_15^1 symmetry-based 13C TOBSY pulse sequence for 1H MRS use and compared it to the MLEV-16 TOCSY sequence. Both sequences were rotor-synchronized and implemented using WURST-8 adiabatic inversion pulses. As discussed theoretically and shown in simulations, the improved magnetization transfer comes from actively removing residual dipolar couplings from the average Hamiltonian. Importantly, the solid-state NMR techniques are tailored to perform measurements at low temperatures where sample degradation is reduced. This is the first demonstration of such a concept for HRMAS metabolic profiling of disease processes, including cancer, from biopsies requiring reduced sample degradation for further genomic analysis.

  13. A Hybrid Parachute Simulation Environment for the Orion Parachute Development Project

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    A parachute simulation environment (PSE) has been developed that aims to take advantage of legacy parachute simulation codes and modern object-oriented programming techniques. This hybrid simulation environment provides the parachute analyst with a natural and intuitive way to construct simulation tasks while preserving the pedigree and authority of established parachute simulations. NASA currently employs four simulation tools for developing and analyzing air-drop tests performed by the CEV Parachute Assembly System (CPAS) Project. These tools were developed at different times, in different languages, and with different capabilities in mind. As a result, each tool has a distinct interface and set of inputs and outputs. However, regardless of the simulation code that is most appropriate for the type of test, engineers typically perform similar tasks for each drop test, such as prediction of loads, assessment of altitude, and sequencing of disreefs or cut-aways. An object-oriented approach to simulation configuration allows the analyst to choose models of real physical test articles (parachutes, vehicles, etc.) and sequence them to achieve the desired test conditions. Once configured, these objects are translated into traditional input lists and processed by the legacy simulation codes. This approach minimizes the number of simulation inputs that the engineer must track while configuring an input file. An object-oriented approach to simulation output allows a common set of post-processing functions to perform routine tasks, such as plotting and timeline generation, with minimal sensitivity to the simulation that generated the data. Flight test data may also be translated into the common output class to simplify test reconstruction and analysis.
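    The object-to-input-list translation described above might look like the sketch below. The class fields and the keyword=value output format are hypothetical stand-ins; the actual CPAS model objects and legacy input formats are not described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Parachute:
    """Illustrative test-article model; field names are hypothetical."""
    name: str
    drag_area_ft2: float
    reef_ratios: tuple

def to_legacy_inputs(chutes):
    """Flatten object configuration into a flat keyword=value input list of
    the kind consumed by a legacy code (the format here is an assumption)."""
    lines = []
    for i, c in enumerate(chutes, start=1):
        lines.append(f"CHUTE{i}_NAME = {c.name}")
        lines.append(f"CHUTE{i}_CDA  = {c.drag_area_ft2}")
        for j, r in enumerate(c.reef_ratios, start=1):
            lines.append(f"CHUTE{i}_REEF{j} = {r}")
    return lines

deck = [Parachute("Drogue", 70.0, (0.4,)),
        Parachute("Main", 3500.0, (0.1, 0.3))]
for line in to_legacy_inputs(deck):
    print(line)
```

    The analyst manipulates a handful of objects; the flattening step, not the analyst, tracks the many individual keywords each legacy code expects.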

  14. An FPGA-based DS-CDMA multiuser demodulator employing adaptive multistage parallel interference cancellation

    NASA Astrophysics Data System (ADS)

    Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi

    2009-12-01

    Since system capacity is severely limited by multiple access interference (MAI), reducing MAI is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the telecommunication terminals' data-transfer link system. In this paper, after surveying various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation (PIC) structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then described. Simulation results show that the proposed adaptive PIC outperforms several existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and the experimental results demonstrate large performance gains over the conventional single-user demodulator, consistent with the simulations.
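    The LMS update at the heart of an adaptive interference canceller can be sketched as follows. The signals, step size, and single-stage structure below are synthetic and generic; they illustrate the LMS mechanism rather than the paper's exact multistage PIC design.

```python
import numpy as np

def lms_cancel(desired, interference_ref, mu=0.01, taps=4):
    """LMS adaptive interference canceller: adapts FIR weights so that the
    filtered reference tracks the interference component of `desired`,
    then subtracts it (a generic sketch of an LMS cancellation stage)."""
    w = np.zeros(taps)
    out = np.zeros(len(desired))
    for n in range(taps, len(desired)):
        x = interference_ref[n - taps:n][::-1]   # most recent samples first
        e = desired[n] - w @ x                   # residual after cancellation
        w += mu * e * x                          # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
signal = np.sign(rng.standard_normal(4000))    # desired user's chips
ref = rng.standard_normal(4000)                # interfering user's waveform
received = signal + 0.8 * np.roll(ref, 1)      # MAI-corrupted input
cleaned = lms_cancel(received, ref, mu=0.02)
```

    After convergence, the residual error approximates the desired user's signal, which is the quantity a PIC stage passes on for re-detection; no training sequence is needed because the reference waveform itself drives adaptation.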

  15. An Exercise Health Simulation Method Based on Integrated Human Thermophysiological Model

    PubMed Central

    Chen, Xiaohui; Yu, Liang; Yang, Kaixing

    2017-01-01

    Healthy exercise has garnered keen research interest in the past few years. It is known that participation in a regular exercise program can help improve various aspects of cardiovascular function and reduce the risk of illness. However, exercise accidents such as dehydration, exertional heatstroke, and even sudden death deserve attention. If these exercise accidents can be analyzed and predicted before they happen, it will be possible to alleviate or avoid disease or mortality. To achieve this objective, an exercise health simulation approach is proposed, built on an integrated human thermophysiological model consisting of a human thermal regulation model and a nonlinear heart rate regulation model. The human thermoregulatory mechanism as well as the heart rate response mechanism during exercise can be simulated. On the basis of the simulated physiological indicators, a fuzzy finite state machine is constructed to obtain the possible health transition sequence and predict the exercise health status. The experimental results show that our integrated exercise thermophysiological model can numerically simulate the thermal and physiological processes of the human body during exercise and that the predicted exercise health transition sequence from the finite state machine can be used in healthcare. PMID:28702074

  16. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavromatis, K; Ivanova, N; Barry, Kerrie

    2007-01-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
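    The data-set construction (random sampling of reads from isolate genomes) can be sketched as follows. The genome strings, read length, and uniform community abundances are toy values; the real data sets used error-containing sequencing reads and controlled abundance distributions.

```python
import random

def simulate_metagenome(genomes, n_reads, read_len=80, seed=0):
    """Build a simulated metagenomic read set by randomly sampling
    fixed-length reads from isolate genomes (uniform abundances here;
    weighting the choice would model community complexity)."""
    rng = random.Random(seed)
    names = list(genomes)
    reads = []
    for _ in range(n_reads):
        name = rng.choice(names)                 # pick a source organism
        g = genomes[name]
        start = rng.randrange(len(g) - read_len)
        reads.append((name, g[start:start + read_len]))
    return reads

genomes = {"orgA": "ACGT" * 500, "orgB": "TTGGCCAA" * 250}
reads = simulate_metagenome(genomes, n_reads=100)
print(len(reads), len(reads[0][1]))
```

    Because each read carries its true source label, assemblies, gene calls, and binning results can be scored against ground truth, which is exactly what makes such simulated sets useful for benchmarking.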

  17. Use of simulated data sets to evaluate the fidelity of Metagenomicprocessing methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  18. On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution

    NASA Astrophysics Data System (ADS)

    Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein

    2018-07-01

    One cause of cancer mortality is tumor evolution to therapy-resistant disease. First-line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for the sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting, to determine appropriate therapy and prepare for potential recurrence.
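    Given a negative binomial noise model, a per-position detection threshold follows from its quantile function. The parameterization below (scipy's n, p in terms of a mean error rate and a dispersion parameter) and the example values are illustrative, not the paper's fitted aggregate model.

```python
from scipy.stats import nbinom

def detection_threshold(depth, mean_err, dispersion, alpha=1e-6):
    """Minimum alt-read count unlikely to arise from sequencing noise,
    modeling per-position error counts as negative binomial.

    scipy parameterization: mean mu = depth * mean_err, n = dispersion,
    p = n / (n + mu); variance = mu + mu**2 / n exceeds the Poisson case.
    """
    mu = depth * mean_err
    n = dispersion
    p = n / (n + mu)
    return int(nbinom.ppf(1.0 - alpha, n, p)) + 1

# e.g. 10,000x depth with a 0.1% mean per-base error rate
thr = detection_threshold(depth=10000, mean_err=0.001, dispersion=5.0)
print(thr)   # alt-read counts at or above this are called above noise
```

    The overdispersion term is the point of using a negative binomial rather than a Poisson background: it widens the noise tail, raising the threshold and reducing false positive variant calls.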

  19. On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution

    NASA Astrophysics Data System (ADS)

    Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein

    2017-12-01

    One cause of cancer mortality is tumor evolution to therapy-resistant disease. First-line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for the sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting, to determine appropriate therapy and prepare for potential recurrence.

  20. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation, for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.

  1. Sequence Memory Constraints Give Rise to Language-Like Structure through Iterated Learning

    PubMed Central

    Cornish, Hannah; Dale, Rick; Kirby, Simon; Christiansen, Morten H.

    2017-01-01

    Human language is composed of sequences of reusable elements. The origin of the sequential structure of language is a hotly debated topic in evolutionary linguistics. In this paper, we show that sets of sequences with language-like statistical properties can emerge from a process of cultural evolution under pressure from chunk-based memory constraints. We employ a novel experimental task that is non-linguistic and non-communicative in nature, in which participants are trained on and later asked to recall a set of sequences one-by-one. Recalled sequences from one participant become training data for the next participant. In this way, we simulate cultural evolution in the laboratory. Our results show a cumulative increase in structure, and by comparing this structure to data from existing linguistic corpora, we demonstrate a close parallel between the sets of sequences that emerge in our experiment and those seen in natural language. PMID:28118370

  2. Sequence Memory Constraints Give Rise to Language-Like Structure through Iterated Learning.

    PubMed

    Cornish, Hannah; Dale, Rick; Kirby, Simon; Christiansen, Morten H

    2017-01-01

    Human language is composed of sequences of reusable elements. The origin of the sequential structure of language is a hotly debated topic in evolutionary linguistics. In this paper, we show that sets of sequences with language-like statistical properties can emerge from a process of cultural evolution under pressure from chunk-based memory constraints. We employ a novel experimental task that is non-linguistic and non-communicative in nature, in which participants are trained on and later asked to recall a set of sequences one-by-one. Recalled sequences from one participant become training data for the next participant. In this way, we simulate cultural evolution in the laboratory. Our results show a cumulative increase in structure, and by comparing this structure to data from existing linguistic corpora, we demonstrate a close parallel between the sets of sequences that emerge in our experiment and those seen in natural language.

  3. DNA-based random number generation in security circuitry.

    PubMed

    Gearheart, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2010-06-01

    DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. This research focuses on further developing DNA-based methodologies to mimic digital data manipulation. While exhibiting fundamental principles, this work was done in conjunction with the vision that DNA-based circuitry, when the technology matures, will form the basis for a tamper-proof security module, revolutionizing the meaning and concept of tamper-proofing and possibly preventing it altogether based on accurate scientific observations. A paramount part of such a solution would be self-generation of random numbers. A novel prototype schema employs solid phase synthesis of oligonucleotides for random construction of DNA sequences; temporary storage and retrieval is achieved through plasmid vectors. A discussion of how to evaluate sequence randomness is included, as well as how these techniques are applied to a simulation of the random number generation circuitry. Simulation results show generated sequences successfully pass three selected NIST random number generation tests specified for security applications.
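
    The randomness evaluation mentioned above can be illustrated with one of the NIST SP 800-22 tests. Below is a sketch of the frequency (monobit) test applied to a binary encoding of a DNA sequence; the purine/pyrimidine mapping is an assumption for illustration, not necessarily the encoding the authors used.

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the null
    hypothesis that 0s and 1s are equally likely. p < 0.01 is the
    usual threshold for rejecting randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2.0))

def dna_to_bits(seq):
    """Purine (A, G) -> 1, pyrimidine (C, T) -> 0: one simple binary
    encoding of a DNA sequence (an assumption for illustration)."""
    return [1 if base in "AG" else 0 for base in seq.upper()]

p = monobit_test(dna_to_bits("ATGC" * 25))   # perfectly balanced -> p = 1.0
```

A strongly biased bit stream (e.g. all ones) yields a p-value near zero and fails the test, while a balanced stream passes.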

  4. Information and redundancy in the burial folding code of globular proteins within a wide range of shapes and sizes.

    PubMed

    Ferreira, Diogo C; van der Linden, Marx G; de Oliveira, Leandro C; Onuchic, José N; de Araújo, Antônio F Pereira

    2016-04-01

    Recent ab initio folding simulations for a limited number of small proteins have corroborated a previous suggestion that atomic burial information obtainable from sequence could be sufficient for tertiary structure determination when combined with sequence-independent geometrical constraints. Here, we use simulations parameterized by native burials to investigate the required amount of information in a diverse set of globular proteins comprising different structural classes and a wide size range. Burial information is provided by a potential term pushing each atom towards one among a small number L of equiprobable concentric layers. An upper bound for the required information is provided by the minimal number of layers L(min) still compatible with correct folding behavior. We obtain L(min) between 3 and 5 for seven small to medium proteins with 50 ≤ Nr ≤ 110 residues while for a larger protein with Nr = 141 we find that L ≥ 6 is required to maintain native stability. We additionally estimate the usable redundancy for a given L ≥ L(min) from the burial entropy associated with the largest folding-compatible fraction of "superfluous" atoms, for which the burial term can be turned off or target layers can be chosen randomly. The estimated redundancy for small proteins with L = 4 is close to 0.8. Our results are consistent with the above-average quality of burial predictions used in previous simulations and indicate that the fraction of approachable proteins could increase significantly with even a mild, plausible, improvement on sequence-dependent burial prediction or on sequence-independent constraints that augment the detectable redundancy during simulations. © 2016 Wiley Periodicals, Inc.

  5. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Y. C.; Sayood, Khalid; Nelson, D. J.

    1991-01-01

    We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  6. A robust coding scheme for packet video

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.

    1992-01-01

    A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.

  7. Sequence Directionality Dramatically Affects LCST Behavior of Elastin-Like Polypeptides.

    PubMed

    Li, Nan K; Roberts, Stefan; Quiroz, Felipe Garcia; Chilkoti, Ashutosh; Yingling, Yaroslava G

    2018-04-30

    Elastin-like polypeptides (ELPs) exhibit an inverse temperature transition, or lower critical solution temperature (LCST) phase behavior, in aqueous solutions. In this paper, the thermally responsive properties of the canonical ELP, poly(VPGVG), and its reverse sequence poly(VGPVG) were investigated by turbidity measurements of the cloud point behavior, circular dichroism (CD) measurements, and all-atom molecular dynamics (MD) simulations to gain a molecular understanding of the mechanism that controls hysteretic phase behavior. It was shown experimentally that both poly(VPGVG) and poly(VGPVG) undergo a transition from soluble to insoluble in aqueous solution upon heating above the transition temperature (Tt). However, poly(VPGVG) resolubilizes upon cooling below its Tt, whereas the reverse sequence, poly(VGPVG), remains aggregated despite significant undercooling below the Tt. The results from MD simulations indicated that a change in sequence order results in significant differences in the dynamics of specific residues, especially valines, which lead to extensive changes in the conformations of VPGVG and VGPVG pentamers and, consequently, dissimilar propensities for secondary structure formation and overall polypeptide structure. These changes affected the relative hydrophilicities of the polypeptides above Tt, where poly(VGPVG) is more hydrophilic than poly(VPGVG), with a more extended conformation and larger surface area, which led to the formation of strong interchain hydrogen bonds responsible for stabilization of the aggregated phase and the observed thermal hysteresis for poly(VGPVG).

  8. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    NASA Astrophysics Data System (ADS)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

    The paper examines self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, and efficient algorithms that are used to simulate self-similar behavior of IP network traffic data are developed and applied. Numerical examples are provided; and simulation results are obtained and analyzed.
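
    A common way to produce self-similar traffic of the kind discussed above is to aggregate many ON/OFF sources whose period lengths are heavy-tailed (Pareto with 1 < alpha < 2). The sketch below uses illustrative parameters and is not the paper's generators.

```python
import random

def pareto(rng, alpha=1.5, xm=1.0):
    """Heavy-tailed Pareto variate via inverse-CDF sampling; for
    1 < alpha < 2 the variance is infinite, the regime in which
    aggregated ON/OFF sources produce self-similar traffic."""
    return xm / (rng.random() ** (1.0 / alpha))

def onoff_source(rng, slots):
    """One ON/OFF source: alternate ON and OFF periods with
    Pareto-distributed integer durations; emit 1 per ON slot."""
    load, t, on = [0] * slots, 0, True
    while t < slots:
        dur = int(pareto(rng)) + 1
        if on:
            for i in range(t, min(t + dur, slots)):
                load[i] = 1
        t += dur
        on = not on
    return load

def aggregate(n_sources=20, slots=1000, seed=1):
    """Superpose independent sources into one traffic trace."""
    rng = random.Random(seed)
    total = [0] * slots
    for _ in range(n_sources):
        for i, x in enumerate(onoff_source(rng, slots)):
            total[i] += x
    return total

traffic = aggregate()
```

Unlike Poisson traffic, the aggregate retains bursts at many time scales because individual ON/OFF periods can be very long.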

  9. Simulative research on generating UWB signals by all-optical BPF

    NASA Astrophysics Data System (ADS)

    Yang, Chunyong; Hou, Rui; Chen, Shaoping

    2007-11-01

    The simulating technique is used to investigate generating and distributing Ultra-Wide-Band signals depend on fiber transmission. Numerical result for the system about the frequency response shows that the characteristics of band-pass filter is presented, and the shorter the wavelength is, the bandwidth of lower frequency is wider. Transmission performance simulation for 12.5Gb/s psudo-random sequence also shows that Gaussian pulse signal after transported in fiber is similar to UWB wave pattern mask of FCC in time domain and frequency spectrum specification of FCC in frequency domain .

  10. Application of artificial neural networks to identify equilibration in computer simulations

    NASA Astrophysics Data System (ADS)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
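
    The classification task described above can be sketched without TensorFlow using a single logistic neuron on a hand-crafted feature. The drift model, feature, and hyperparameters below are illustrative assumptions, not the authors' network: "unequilibrated" sequences carry an exponentially decaying trend, and the feature is the difference between first-half and second-half means.

```python
import random, math

def make_seq(rng, equilibrated, n=50):
    """Equilibrated: stationary noise; unequilibrated: decaying drift."""
    drift = 0.0 if equilibrated else 2.0
    return [drift * math.exp(-t / 10.0) + rng.gauss(0, 0.1) for t in range(n)]

def feature(seq):
    """First-half mean minus second-half mean: near zero for a
    stationary (equilibrated) trajectory, positive under decay."""
    h = len(seq) // 2
    return sum(seq[:h]) / h - sum(seq[h:]) / (len(seq) - h)

def train(xs, ys, lr=1.0, epochs=200):
    """Single logistic neuron fitted by stochastic gradient ascent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

rng = random.Random(0)
labels = [i % 2 for i in range(100)]          # 1 = equilibrated
feats = [feature(make_seq(rng, y == 1)) for y in labels]
w, b = train(feats, labels)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
acc = sum(predict(x) == (y == 1) for x, y in zip(feats, labels)) / 100
```

Because the drift makes the two classes nearly separable in this one feature, even this tiny model reaches high training accuracy, which mirrors the paper's point that a very small network suffices for this pattern.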

  11. A fortran program for Monte Carlo simulation of oil-field discovery sequences

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.

    1993-01-01

    We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
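
    Discovery process models of this kind are commonly simulated by size-biased sampling without replacement: the probability of discovering a field is proportional to its size raised to an exponent beta. The sketch below is in Python rather than Fortran and uses an illustrative lognormal parent population rather than the paper's three-parameter log gamma model; it shows only the core sampling loop.

```python
import random

def simulate_discovery(field_sizes, beta=1.0, seed=0):
    """Sample fields without replacement with probability proportional
    to size**beta, so larger fields tend to be found earlier (beta > 0)."""
    rng = random.Random(seed)
    remaining = list(field_sizes)
    sequence = []
    while remaining:
        weights = [s ** beta for s in remaining]
        r, acc = rng.random() * sum(weights), 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:                     # roulette-wheel selection
                sequence.append(remaining.pop(i))
                break
    return sequence

# Synthetic parent population (lognormal spread of field sizes is an
# illustrative stand-in for the log gamma distribution in the paper):
rng = random.Random(1)
parent = [rng.lognormvariate(3.0, 1.0) for _ in range(200)]
seq = simulate_discovery(parent, beta=1.5)
```

Comparing statistics of such synthetic sequences against the actual discovery record, over a grid of (beta, population size, distribution parameters), is the fitting procedure the abstract describes.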

  12. Cloning the Gravity and Shear Stress Related Genes from MG-63 Cells by Subtracting Hybridization

    NASA Astrophysics Data System (ADS)

    Zhang, Shu; Dai, Zhong-quan; Wang, Bing; Cao, Xin-sheng; Li, Ying-hui; Sun, Xi-qing

    2008-06-01

    Background: The purpose of the present study was to clone gravity- and shear stress-related genes from osteoblast-like human osteosarcoma MG-63 cells by subtractive hybridization. Methods: MG-63 cells were divided into two groups (a 1G group and a simulated microgravity group). After being cultured for 60 h in the two different gravitational environments, both groups of MG-63 cells were treated with 1.5 Pa fluid shear stress (FSS) for 60 min. Total cellular RNA was isolated, and the gravity- and shear stress-related genes were cloned by subtractive hybridization. Results: 200 clones were obtained. 30 positive clones were selected by PCR using vector-based primers and sequenced. The obtained sequences were analyzed by BLAST. Changes in 17 sequences were confirmed by RT-PCR; these genes are related to cell proliferation, cell differentiation, protein synthesis, signal transduction and apoptosis. 5 unknown genes related to gravity and shear stress were found. Conclusion: Our results indicate that simulated microgravity may change the activities of MG-63 cells by inducing functional alterations of specific genes.

  13. Implementation and Testing of Turbulence Models for the F18-HARV Simulation

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1998-01-01

    This report presents three methods of implementing the Dryden power spectral density model for atmospheric turbulence. Included are the equations which define the three methods and computer source code written in Advanced Continuous Simulation Language to implement the equations. Time-history plots and sample statistics of simulated turbulence results from executing the code in a test program are also presented. Power spectral densities were computed for sample sequences of turbulence and are plotted for comparison with the Dryden spectra. The three model implementations were installed in a nonlinear six-degree-of-freedom simulation of the High Alpha Research Vehicle airplane. Aircraft simulation responses to turbulence generated with the three implementations are presented as plots.
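
    One standard implementation route for the Dryden model is to pass Gaussian white noise through a first-order shaping filter whose output spectrum approximates the Dryden longitudinal gust spectrum. The discretization and parameter values below are an illustrative sketch in Python, not the report's ACSL code; airspeed V, scale length L_u, and intensity sigma_u are assumed values.

```python
import random, math

def dryden_longitudinal(V=100.0, L_u=200.0, sigma_u=1.5, dt=0.01,
                        n=5000, seed=0):
    """Gaussian white noise through a discrete first-order shaping
    filter: break frequency V/L_u, stationary standard deviation
    sigma_u (the noise gain is chosen so the output variance matches)."""
    rng = random.Random(seed)
    a = V / L_u                                  # filter pole (rad/s)
    gain = sigma_u * math.sqrt(2.0 * a * dt)     # keeps output std ~ sigma_u
    u, out = 0.0, []
    for _ in range(n):
        u = (1.0 - a * dt) * u + gain * rng.gauss(0.0, 1.0)
        out.append(u)
    return out

gust = dryden_longitudinal()
```

Sample statistics and power spectral densities of such a trace can then be compared against the analytic Dryden spectrum, as the report does for its three implementations.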

  14. Draft Genome Sequences of Six Mycobacterium immunogenum Strains Obtained from a Chloraminated Drinking Water Distribution System Simulator

    EPA Science Inventory

    We report the draft genome sequences of six Mycobacterium immunogenum strains isolated from a chloraminated drinking water distribution system simulator subjected to changes in operational parameters. M. immunogenum, a rapidly growing mycobacterium previously reported as the cause of hyp...

  15. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
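
    The simulated annealing framework itself is generic; the sketch below applies it to a toy quadratic objective standing in for the white matter-gray matter signal difference (the objective, target values, and annealing schedule are all hypothetical, not the paper's 35-parameter MP-RAGE optimization). Since annealing minimizes, a signal difference to be maximized would be negated.

```python
import random, math

def anneal(objective, x0, step=0.1, t0=1.0, cooling=0.995,
           iters=5000, seed=0):
    """Generic simulated annealing: perturb the parameter vector with
    Gaussian steps, accept worse moves with probability exp(-delta/T),
    and cool T geometrically; track the best point visited."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy stand-in for the imaging objective (hypothetical): recover a
# three-element parameter vector that minimizes distance to a target.
target = [0.3, 0.7, 1.1]
obj = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, fbest = anneal(obj, [0.0, 0.0, 0.0])
```

The appeal for pulse sequence design is exactly what the abstract notes: the objective can be an arbitrary Bloch-equation simulation over dozens of coupled parameters, with no derivatives required.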

  16. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences.

    PubMed

    Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An

    2005-11-15

    Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based or alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences and (5) beta is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The algorithm SK-LD, estimate beta and simulation software are implemented in MATLAB code, and are available at http://www.stat.ncku.edu.tw/tjwu
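
    A word-based measure in the spirit of SK-LD can be sketched as a symmetric Kullback-Leibler discrepancy between k-mer (word) frequency profiles of two sequences. The smoothing constant eps below is an assumption to handle words absent from one sequence; the authors' MATLAB implementation may differ.

```python
import math
from collections import Counter

def kmer_freqs(seq, k):
    """Relative frequencies of all length-k words in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sym_kl(seq1, seq2, k=2, eps=1e-6):
    """Symmetric Kullback-Leibler discrepancy between word frequency
    profiles: sum over words of (p - q) * log(p / q)."""
    p, q = kmer_freqs(seq1, k), kmer_freqs(seq2, k)
    d = 0.0
    for w in set(p) | set(q):
        pw, qw = p.get(w, eps), q.get(w, eps)
        d += (pw - qw) * math.log(pw / qw)
    return d

a = "ATGCATGCATGCATGC"
b = "ATATATATATATATAT"
```

The measure is zero for identical sequences, symmetric in its arguments, and grows as the word-usage profiles diverge, which is what makes it usable as a fast pre-filter before alignment.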

  17. A mechanistic insight into the amyloidogenic structure of hIAPP peptide revealed from sequence analysis and molecular dynamics simulation.

    PubMed

    Chakraborty, Sandipan; Chatterjee, Barnali; Basu, Soumalee

    2012-07-01

    A collective approach combining sequence analysis, phylogenetic trees and in silico prediction of amyloidogenicity using bioinformatics tools has been used to correlate the observed species-specific variations in IAPP sequences with amyloid-forming propensity. Observed substitution patterns indicate that probable changes in local hydrophobicity are instrumental in altering the aggregation propensity of the peptide. In particular, residues at the 17th, 22nd and 23rd positions of the IAPP peptide are found to be crucial for amyloid formation. Proline25 primarily dictates the observed non-amyloidogenicity in rodents. Furthermore, an extensive molecular dynamics simulation of 0.24 μs has been carried out with the human IAPP (hIAPP) fragment 19-27, the portion showing maximum sequence variation across different species, to understand the native folding characteristics of this region. Principal component analysis in combination with free energy landscape analysis illustrates a four-residue turn spanning residues 22 to 25. The results provide structural insight into the intramolecular β-sheet structure of amylin, which probably is the template for nucleation of fibril formation and growth, a pathogenic feature of type II diabetes. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Field scale test of multi-dimensional flow and morphodynamic simulations used for restoration design analysis

    USGS Publications Warehouse

    McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive and, under most channel-forming flows, the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.

  19. Structurally detailed coarse-grained model for Sec-facilitated co-translational protein translocation and membrane integration

    PubMed Central

    Miller, Thomas F.

    2017-01-01

    We present a coarse-grained simulation model that is capable of simulating the minute-timescale dynamics of protein translocation and membrane integration via the Sec translocon, while retaining sufficient chemical and structural detail to capture many of the sequence-specific interactions that drive these processes. The model includes accurate geometric representations of the ribosome and Sec translocon, obtained directly from experimental structures, and interactions parameterized from nearly 200 μs of residue-based coarse-grained molecular dynamics simulations. A protocol for mapping amino-acid sequences to coarse-grained beads enables the direct simulation of trajectories for the co-translational insertion of arbitrary polypeptide sequences into the Sec translocon. The model reproduces experimentally observed features of membrane protein integration, including the efficiency with which polypeptide domains integrate into the membrane, the variation in integration efficiency upon single amino-acid mutations, and the orientation of transmembrane domains. The central advantage of the model is that it connects sequence-level protein features to biological observables and timescales, enabling direct simulation for the mechanistic analysis of co-translational integration and for the engineering of membrane proteins with enhanced membrane integration efficiency. PMID:28328943

  20. First results from the IllustrisTNG simulations: the galaxy colour bimodality

    NASA Astrophysics Data System (ADS)

    Nelson, Dylan; Pillepich, Annalisa; Springel, Volker; Weinberger, Rainer; Hernquist, Lars; Pakmor, Rüdiger; Genel, Shy; Torrey, Paul; Vogelsberger, Mark; Kauffmann, Guinevere; Marinacci, Federico; Naiman, Jill

    2018-03-01

    We introduce the first two simulations of the IllustrisTNG project, a next generation of cosmological magnetohydrodynamical simulations, focusing on the optical colours of galaxies. We explore TNG100, a rerun of the original Illustris box, and TNG300, which includes 2 × 2500^3 resolution elements in a volume 20 times larger. Here, we present first results on the galaxy colour bimodality at low redshift. Accounting for the attenuation of stellar light by dust, we compare the simulated (g - r) colours of 10^9 < M⋆/M⊙ < 10^12.5 galaxies to the observed distribution from the Sloan Digital Sky Survey. We find a striking improvement with respect to the original Illustris simulation, as well as excellent quantitative agreement with the observations, with a sharp transition in median colour from blue to red at a characteristic M⋆ ˜ 10^10.5 M⊙. Investigating the build-up of the colour-mass plane and the formation of the red sequence, we demonstrate that the primary driver of galaxy colour transition is supermassive black hole feedback in its low accretion state. Across the entire population the median colour transition time-scale Δtgreen is ˜1.6 Gyr, a value which drops for increasingly massive galaxies. We find signatures of the physical process of quenching: at fixed stellar mass, redder galaxies have lower star formation rates, gas fractions, and gas metallicities; their stellar populations are also older and their large-scale interstellar magnetic fields weaker than in bluer galaxies. Finally, we measure the amount of stellar mass growth on the red sequence. Galaxies with M⋆ > 10^11 M⊙ which redden at z < 1 accumulate on average ˜25 per cent of their final z = 0 mass post-reddening; at the same time, ˜18 per cent of such massive galaxies acquire half or more of their final stellar mass while on the red sequence.

  1. Sub-Terrahertz Spectroscopy of E.COLI Dna: Experiment, Statistical Model, and MD Simulations

    NASA Astrophysics Data System (ADS)

    Sizov, I.; Dorofeeva, T.; Khromova, T.; Gelmont, B.; Globus, T.

    2012-06-01

    We present results of a combined experimental and computational study of sub-THz absorption spectra from Escherichia coli (E. coli) DNA. Measurements were conducted using a Bruker FTIR spectrometer with a liquid-helium-cooled bolometer and a recently developed frequency-domain sensor operating at room temperature, with spectral resolutions of 0.25 cm-1 and 0.03 cm-1, respectively. We have earlier demonstrated that molecular dynamics (MD) simulation can be effectively applied to characterize relatively small biological molecules, such as transfer RNA or the small protein thioredoxin from E. coli, and helps to understand and predict their absorption spectra. The large size of DNA macromolecules (~5 million base pairs for E. coli DNA), however, prevents direct application of MD simulation at the current level of computational capabilities. Therefore, by applying a second-order Markov chain approach and a Monte-Carlo technique, we have developed a new statistical model to construct DNA sequences from biological cells. These short representative sequences (20-60 base pairs) are built upon the most frequently repeated fragments (2-10 base pairs) in the original DNA. Using this new approach, we constructed DNA sequences for several strains of E. coli, including the well-known non-pathogenic strain BL21, the uro-pathogenic strain CFT073, and the deadly EDL933 strain (O157:H7), and used MD simulations to calculate vibrational absorption spectra of these strains. Significant differences between strains are clearly present, both in averaged spectra and in all components for particular orientations. The mechanism of interaction of THz radiation with a biological molecule is studied by analyzing the dynamics of atoms and the correlation of local vibrations in the modeled molecule. Simulated THz vibrational spectra of DNA are compared with experimental results. With a spectral resolution of 0.1 cm-1 or better, which is now available in experiments, discrimination between different strains of the same bacteria becomes straightforward.
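
    The statistical model described above (a second-order Markov chain fitted to a genome, then sampled to build short representative fragments) can be sketched as follows. The toy "genome" string and fragment length are illustrative, not the E. coli data.

```python
import random
from collections import Counter, defaultdict

def fit_markov2(seq):
    """Second-order Markov model: counts of the next base conditioned
    on the previous two bases."""
    trans = defaultdict(Counter)
    for i in range(len(seq) - 2):
        trans[seq[i:i + 2]][seq[i + 2]] += 1
    return trans

def sample_fragment(trans, length, seed=0):
    """Monte-Carlo sampling of a short representative fragment."""
    rng = random.Random(seed)
    out = rng.choice(list(trans))            # start from an observed bigram
    while len(out) < length:
        counts = trans.get(out[-2:])
        if not counts:                       # dead end: state never continued
            break
        bases, weights = zip(*counts.items())
        out += rng.choices(bases, weights=weights)[0]
    return out

genome = "ATGCGCGATATCGCGATATGCGCATATCGCG" * 4   # toy stand-in genome
model = fit_markov2(genome)
frag = sample_fragment(model, 40)
```

By construction, every trigram of the sampled fragment occurs in the training genome, which is what makes the short fragments "representative" enough to stand in for the full molecule in an MD simulation.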

  2. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    PubMed

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. 
Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
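
    The baseline that LTTae,NGS extends is the classic Cochran-Armitage linear trend test; the sketch below implements that baseline statistic only, not the authors' error-allowing extension, and the genotype counts are illustrative.

```python
def trend_test(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage linear trend test for a 2 x k table of genotype
    counts: returns the 1-df chi-square statistic for a linear trend in
    case proportion across ordered genotype scores (0, 1, 2 copies of
    the variant allele)."""
    r, s = sum(cases), sum(controls)
    n = r + s
    tot = [a + b for a, b in zip(cases, controls)]
    t = sum(w * a for w, a in zip(scores, cases))          # observed T
    m1 = sum(w * c for w, c in zip(scores, tot)) / n       # E[score]
    m2 = sum(w * w * c for w, c in zip(scores, tot)) / n   # E[score^2]
    var = r * s * (m2 - m1 * m1) / (n - 1)                 # Var(T) under H0
    return (t - r * m1) ** 2 / var

# Genotype counts (0, 1, 2 variant copies) for cases and controls:
stat = trend_test((10, 20, 30), (30, 20, 10))
```

A statistic above 3.84 is significant at the 5% level for 1 df; the paper's extension modifies the observed counts' model to absorb differential sequencing misclassification while keeping this single-df form.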

  3. Single variant and multi-variant trend tests for genetic association with next generation sequencing that are robust to sequencing error

    PubMed Central

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Andrew; Musolf, Anthony; Matise, Tara C.; Finch, Stephen J.; Gordon, Derek

    2013-01-01

    As with any new technology, next generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model, based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to that data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. 
Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p-value, no matter how many loci. PMID:23594495

  4. Evaluating a Novel Instructional Sequence for Conceptual Change in Physics Using Interactive Simulations

    ERIC Educational Resources Information Center

    Fan, Xinxin; Geelan, David; Gillies, Robyn

    2018-01-01

    This study investigated the effectiveness of a novel inquiry-based instructional sequence using interactive simulations for supporting students' development of conceptual understanding, inquiry process skills and confidence in learning. The study, conducted in Beijing, involved two teachers and 117 students in four classes. The teachers…

  5. Gastrointestinal Endogenous Proteins as a Source of Bioactive Peptides - An In Silico Study

    PubMed Central

    Dave, Lakshmi A.; Montoya, Carlos A.; Rutherfurd, Shane M.; Moughan, Paul J.

    2014-01-01

    Dietary proteins are known to contain bioactive peptides that are released during digestion. Endogenous proteins secreted into the gastrointestinal tract represent a quantitatively greater supply of protein to the gut lumen than those of dietary origin. Many of these endogenous proteins are digested in the gastrointestinal tract but the possibility that these are also a source of bioactive peptides has not been considered. An in silico prediction method was used to test if bioactive peptides could be derived from the gastrointestinal digestion of gut endogenous proteins. Twenty six gut endogenous proteins and seven dietary proteins were evaluated. The peptides present after gastric and intestinal digestion were predicted based on the amino acid sequence of the proteins and the known specificities of the major gastrointestinal proteases. The predicted resultant peptides possessing amino acid sequences identical to those of known bioactive peptides were identified. After gastrointestinal digestion (based on the in silico simulation), the total number of bioactive peptides predicted to be released ranged from 1 (gliadin) to 55 (myosin) for the selected dietary proteins and from 1 (secretin) to 39 (mucin-5AC) for the selected gut endogenous proteins. Within the intact proteins and after simulated gastrointestinal digestion, angiotensin converting enzyme (ACE)-inhibitory peptide sequences were the most frequently observed in both the dietary and endogenous proteins. Among the dietary proteins, after in silico simulated gastrointestinal digestion, myosin was found to have the highest number of ACE-inhibitory peptide sequences (49 peptides), while for the gut endogenous proteins, mucin-5AC had the greatest number of ACE-inhibitory peptide sequences (38 peptides). Gut endogenous proteins may be an important source of bioactive peptides in the gut particularly since gut endogenous proteins represent a quantitatively large and consistent source of protein. PMID:24901416
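
    The in silico digestion step can be sketched for a single protease: trypsin cleaves C-terminal to K or R except when the next residue is P (the Keil rule), and the resulting fragments are looked up in a library of known bioactive peptides. The protein string and the "known bioactive" set below are hypothetical; the study used several gastrointestinal proteases and real peptide databases.

```python
import re

def trypsin_digest(protein):
    """In silico trypsin digestion: split after K or R unless the next
    residue is P (zero-width lookbehind/lookahead split)."""
    return [f for f in re.split(r"(?<=[KR])(?!P)", protein) if f]

# Hypothetical mini-library of known bioactive peptide sequences,
# standing in for the databases used in such studies.
known_bioactive = {"VLR", "IPP", "VPP"}

protein = "MKVLRAPGGFRIPAVFK"   # illustrative sequence, not from the paper
frags = trypsin_digest(protein)
hits = sorted(set(frags) & known_bioactive)
```

Running the same digestion over each protease's cleavage rule in sequence (pepsin, then trypsin and chymotrypsin) and intersecting with a bioactive-peptide database is the essence of the prediction pipeline described above.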

  6. Unbalanced voltage control of virtual synchronous generator in isolated micro-grid

    NASA Astrophysics Data System (ADS)

    Cao, Y. Z.; Wang, H. N.; Chen, B.

    2017-06-01

    Virtual synchronous generator (VSG) control is recommended to stabilize the voltage and frequency in an isolated micro-grid. However, common VSG control is challenged by widely used unbalanced loads, and the resulting unbalanced-voltage problem worsens the power quality of the micro-grid. In this paper, the mathematical model of the VSG is presented. Based on an analysis of the positive- and negative-sequence equivalent circuits of the VSG, an approach is proposed to eliminate the negative-sequence voltage of a VSG with unbalanced loads. A delay-cancellation method and a PI controller are utilized to identify and suppress the negative-sequence voltages. Simulation results verify the feasibility of the proposed control strategy.
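The paper's delay-cancellation method works on time-domain signals; as a phasor-domain illustration of the quantity such a controller drives to zero, the Fortescue decomposition that defines the positive- and negative-sequence components can be sketched as (the unbalanced phasor values below are arbitrary):

```python
import cmath

def symmetrical_components(va, vb, vc):
    """Fortescue decomposition of three phase phasors into zero-,
    positive- and negative-sequence components."""
    a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a * a * vc) / 3   # positive sequence
    v2 = (va + a * a * vb + a * vc) / 3   # negative sequence
    return v0, v1, v2

# A balanced set has only a positive-sequence component ...
balanced = (1, cmath.exp(-2j * cmath.pi / 3), cmath.exp(2j * cmath.pi / 3))
# ... while an unbalanced load introduces a negative-sequence voltage.
unbalanced = (1.1, cmath.exp(-2j * cmath.pi / 3),
              0.9 * cmath.exp(2j * cmath.pi / 3))
```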

  7. Zadoff-Chu sequence-based hitless ranging scheme for OFDMA-PON configured 5G fronthaul uplinks

    NASA Astrophysics Data System (ADS)

    Reza, Ahmed Galib; Rhee, June-Koo Kevin

    2017-05-01

    A Zadoff-Chu (ZC) sequence-based low-complexity hitless upstream time synchronization scheme is proposed for an orthogonal frequency division multiple access passive optical network configured cloud radio access network fronthaul. The algorithm is based on gradual loading of the ZC sequences, where the phase discontinuity due to the cyclic prefix is alleviated by a frequency-domain phase precoder, eliminating the requirement for guard bands to mitigate intersymbol interference and inter-carrier interference. Simulation results for uncontrolled-wavelength asynchronous transmissions from four concurrent transmitting optical network units are presented to demonstrate the effectiveness of the proposed scheme.
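The ZC sequences at the heart of the scheme are straightforward to generate. A minimal sketch (the root index and length are arbitrary illustrative choices, coprime with odd length) showing the two properties that make them attractive for ranging, constant amplitude and ideal cyclic autocorrelation:

```python
import cmath

def zadoff_chu(root, length):
    """Odd-length Zadoff-Chu sequence x[n] = exp(-j*pi*root*n*(n+1)/length)."""
    return [cmath.exp(-1j * cmath.pi * root * n * (n + 1) / length)
            for n in range(length)]

def cyclic_autocorr(seq, lag):
    """Normalised cyclic autocorrelation of a complex sequence at a lag."""
    n = len(seq)
    return sum(seq[i] * seq[(i + lag) % n].conjugate() for i in range(n)) / n

zc = zadoff_chu(root=25, length=63)
```

When the root is coprime with the length, the cyclic autocorrelation is exactly zero at every nonzero lag, which is what lets a receiver resolve timing unambiguously.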

  8. ddClone: joint statistical inference of clonal populations from single cell and bulk tumour sequencing data.

    PubMed

    Salehi, Sohrab; Steif, Adi; Roth, Andrew; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P

    2017-03-01

    Next-generation sequencing (NGS) of bulk tumour tissue can identify constituent cell populations in cancers and measure their abundance. This requires computational deconvolution of allelic counts from somatic mutations, which may be incapable of fully resolving the underlying population structure. Single cell sequencing (SCS) is a more direct method, although its replacement of NGS is impeded by technical noise and sampling limitations. We propose ddClone, which analytically integrates NGS and SCS data, leveraging their complementary attributes through joint statistical inference. We show on real and simulated datasets that ddClone produces more accurate results than can be achieved by either method alone.
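ddClone's model is considerably more sophisticated, but the core idea of joint inference, multiplying a bulk-allele-count likelihood by a single-cell presence likelihood over a shared prevalence parameter, can be sketched on a grid. The diploid-heterozygous VAF assumption and all counts below are illustrative, not taken from the paper.

```python
import math

def binom_logpmf(k, n, p):
    """Log binomial pmf with safe handling of the p = 0 and p = 1 edges."""
    if p <= 0.0:
        return 0.0 if k == 0 else -math.inf
    if p >= 1.0:
        return 0.0 if k == n else -math.inf
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def joint_posterior(bulk_alt, bulk_depth, sc_present, sc_total, grid=101):
    """Grid posterior over a clone's prevalence phi, multiplying a bulk
    likelihood (binomial read counts; VAF = phi/2 for a diploid
    heterozygous SNV) by a single-cell likelihood (binomial over cells
    observed to carry the mutation)."""
    log_post = []
    for k in range(grid):
        phi = k / (grid - 1)
        log_post.append(binom_logpmf(bulk_alt, bulk_depth, phi / 2)
                        + binom_logpmf(sc_present, sc_total, phi))
    m = max(log_post)
    w = [math.exp(p - m) for p in log_post]
    z = sum(w)
    return [x / z for x in w]

# Hypothetical counts: 25 variant reads of 100, mutation seen in 5 of 10 cells.
post = joint_posterior(bulk_alt=25, bulk_depth=100, sc_present=5, sc_total=10)
```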

  9. Human Resource Scheduling in Performing a Sequence of Discrete Responses

    DTIC Science & Technology

    2009-02-28

    each is a graph comparing simulated results of each respective model with data from Experiment 3b. As described below the parameters of the model...initiated in parallel with ongoing Central operations on another. To fix model parameters we estimated the range of times to perform the sum of the...standard deviation for each parameter was set to 50% of mean value. Initial simulations found no meaningful differences between setting the standard

  10. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.

  11. Competing Pathways and Multiple Folding Nuclei in a Large Multidomain Protein, Luciferase.

    PubMed

    Scholl, Zackary N; Yang, Weitao; Marszalek, Piotr E

    2017-05-09

    Proteins obtain their final functional configuration through incremental folding with many intermediate steps in the folding pathway. If known, these intermediate steps could be valuable new targets for designing therapeutics and the sequence of events could elucidate the mechanism of refolding. However, determining these intermediate steps is hardly an easy feat, and has been elusive for most proteins, especially large, multidomain proteins. Here, we effectively map part of the folding pathway for the model large multidomain protein, Luciferase, by combining single-molecule force-spectroscopy experiments and coarse-grained simulation. Single-molecule refolding experiments reveal the initial nucleation of folding while simulations corroborate these stable core structures of Luciferase, and indicate the relative propensities for each to propagate to the final folded native state. Both experimental refolding and Monte Carlo simulations of Markov state models generated from simulation reveal that Luciferase most often folds along a pathway originating from the nucleation of the N-terminal domain, and that this pathway is the least likely to form nonnative structures. We then engineer truncated variants of Luciferase whose sequences corresponded to the putative structure from simulation and we use atomic force spectroscopy to determine their unfolding and stability. These experimental results corroborate the structures predicted from the folding simulation and strongly suggest that they are intermediates along the folding pathway. Taken together, our results suggest that initial Luciferase refolding occurs along a vectorial pathway and also suggest a mechanism that chaperones may exploit to prevent misfolding. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. RIEMS: a software pipeline for sensitive and comprehensive taxonomic classification of reads from metagenomics datasets.

    PubMed

    Scheuch, Matthias; Höper, Dirk; Beer, Martin

    2015-03-03

    Fuelled by the advent and subsequent development of next generation sequencing technologies, metagenomics became a powerful tool for the analysis of microbial communities both scientifically and diagnostically. The biggest challenge is the extraction of relevant information from the huge sequence datasets generated for metagenomics studies. Although a plethora of tools are available, data analysis is still a bottleneck. To overcome the bottleneck of data analysis, we developed an automated computational workflow called RIEMS - Reliable Information Extraction from Metagenomic Sequence datasets. RIEMS assigns every individual read sequence within a dataset taxonomically by cascading different sequence analyses with decreasing stringency of the assignments using various software applications. After completion of the analyses, the results are summarised in a clearly structured result protocol organised taxonomically. The high accuracy and performance of RIEMS analyses were proven in comparison with other tools for metagenomics data analysis using simulated sequencing read datasets. RIEMS has the potential to fill the gap that still exists with regard to data analysis for metagenomics studies. The usefulness and power of RIEMS for the analysis of genuine sequencing datasets was demonstrated with an early version of RIEMS in 2011 when it was used to detect the orthobunyavirus sequences leading to the discovery of Schmallenberg virus.

  13. Effects of 16S rDNA sampling on estimates of the number of endosymbiont lineages in sucking lice

    PubMed Central

    Burleigh, J. Gordon; Light, Jessica E.; Reed, David L.

    2016-01-01

    Phylogenetic trees can reveal the origins of endosymbiotic lineages of bacteria and detect patterns of co-evolution with their hosts. Although taxon sampling can greatly affect phylogenetic and co-evolutionary inference, most hypotheses of endosymbiont relationships are based on few available bacterial sequences. Here we examined how different sampling strategies of Gammaproteobacteria sequences affect estimates of the number of endosymbiont lineages in parasitic sucking lice (Insecta: Phthiraptera: Anoplura). We estimated the number of louse endosymbiont lineages using both newly obtained and previously sequenced 16S rDNA bacterial sequences and more than 42,000 16S rDNA sequences from other Gammaproteobacteria. We also performed parametric and nonparametric bootstrapping experiments to examine the effects of phylogenetic error and uncertainty on these estimates. Sampling of 16S rDNA sequences affects the estimates of endosymbiont diversity in sucking lice until we reach a threshold of genetic diversity, the size of which depends on the sampling strategy. Sampling by maximizing the diversity of 16S rDNA sequences is more efficient than randomly sampling available 16S rDNA sequences. Although simulation results validate estimates of multiple endosymbiont lineages in sucking lice, the bootstrap results suggest that the precise number of endosymbiont origins is still uncertain. PMID:27547523

  14. [Effect of simulated inorganic anion leaching solution of electroplating sludge on the bioactivity of Acidithiobacillus ferrooxidans].

    PubMed

    Chen, Yan; Huang, Fang; Xie, Xin-Yuan

    2014-04-01

    An Acidithiobacillus ferrooxidans strain WZ-1 (GenBank sequence number: JQ968461) was used as the research object. The effects of Cl-, NO3-, F- and 4 kinds of simulated inorganic anion leaching solutions of electroplating sludge on the Fe2+-oxidation bioactivity and apparent respiratory rate of WZ-1 were investigated. The results showed that Cl- and NO3- had no influence on the bioactivity of WZ-1 at concentrations of 5.0 g/L and 1.0 g/L, respectively. WZ-1 tolerated high levels of Cl- and NO3- (about 10.0 g/L and 5.0 g/L, respectively), but had lower tolerance to F- (25 mg/L). The simulated inorganic anion leaching solutions differed significantly in their effects on the bioactivity of WZ-1, in the order Cl-/NO3-/F- >= NO3-/F- > Cl-/F- > Cl-/NO3-.

  15. Evolutionary Dynamics and Diversity in Microbial Populations

    NASA Astrophysics Data System (ADS)

    Thompson, Joel; Fisher, Daniel

    2013-03-01

    Diseases such as flu and cancer adapt at an astonishing rate. In large part, viruses and cancers are so difficult to prevent because they are continually evolving. Controlling such ``evolutionary diseases'' requires a better understanding of the underlying evolutionary dynamics. It is conventionally assumed that adaptive mutations are rare and therefore will occur and sweep through the population in succession. Recent experiments using modern sequencing technologies have illuminated the many ways in which real population sequence data does not conform to the predictions of conventional theory. We consider a very simple model of asexual evolution and perform simulations in a range of parameters thought to be relevant for microbes and cancer. Simulation results reveal complex evolutionary dynamics typified by competition between lineages with different sets of adaptive mutations. This dynamical process leads to a distribution of mutant gene frequencies different than expected under the conventional assumption that adaptive mutations are rare. Simulated gene frequencies share several conspicuous features with data collected from laboratory-evolved yeast and the worldwide population of influenza.
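A minimal version of such a model, an asexual Wright-Fisher population with multiplicative selection in which concurrent beneficial lineages can compete, can be simulated in a few lines. All parameter values below are arbitrary illustrative choices, not those used in the study.

```python
import random

def wright_fisher(pop_size, mutation_rate, effect, generations, seed=0):
    """Asexual Wright-Fisher model: a genome is the tuple of beneficial
    mutations it carries; fitness is multiplied by (1 + effect) per mutation."""
    rng = random.Random(seed)
    pop = [()] * pop_size
    next_id = 0
    for _ in range(generations):
        # Selection + drift: resample the population weighted by fitness.
        weights = [(1 + effect) ** len(g) for g in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)
        # Mutation: each individual gains a new, uniquely labelled mutation
        # with probability mutation_rate, so several adaptive lineages can
        # segregate at once (clonal interference) instead of sweeping in
        # strict succession.
        offspring = []
        for g in pop:
            if rng.random() < mutation_rate:
                g = g + (next_id,)
                next_id += 1
            offspring.append(g)
        pop = offspring
    return pop

pop = wright_fisher(pop_size=500, mutation_rate=0.01, effect=0.05,
                    generations=200)
mean_fitness = sum(1.05 ** len(g) for g in pop) / len(pop)
```

Tracking mutation frequencies over time in such runs is what produces the lineage-competition dynamics described above.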

  16. Self-sequencing of amino acids and origins of polyfunctional protocells

    NASA Technical Reports Server (NTRS)

    Fox, S. W.

    1984-01-01

    The role of proteins in the origin of living things is discussed. It has been experimentally established that amino acids can sequence themselves under simulated geological conditions with highly nonrandom products which accordingly contain diverse information. Multiple copies of each type of macromolecule are formed, resulting in greater power for any protoenzymic molecule than would accrue from a single copy of each type. Thermal proteins are readily incorporated into laboratory protocells. The experimental evidence for original polyfunctional protocells is discussed.

  17. Evaluating methods to visualize patterns of genetic differentiation on a landscape.

    PubMed

    House, Geoffrey L; Hahn, Matthew W

    2018-05-01

    With advances in sequencing technology, research in the field of landscape genetics can now be conducted at unprecedented spatial and genomic scales. This has been especially evident when using sequence data to visualize patterns of genetic differentiation across a landscape due to demographic history, including changes in migration. Two recent model-based visualization methods that can highlight unusual patterns of genetic differentiation across a landscape, SpaceMix and EEMS, are increasingly used. While SpaceMix's model can infer long-distance migration, EEMS' model is more sensitive to short-distance changes in genetic differentiation, and it is unclear how these differences may affect their results in various situations. Here, we compare SpaceMix and EEMS side by side using landscape genetics simulations representing different migration scenarios. While both methods excel when patterns of simulated migration closely match their underlying models, they can produce either unintuitive or misleading results when the simulated migration patterns match their models less well, and this may be difficult to assess in empirical data sets. We also introduce unbundled principal components (un-PC), a fast, model-free method to visualize patterns of genetic differentiation by combining principal components analysis (PCA), which is already used in many landscape genetics studies, with the locations of sampled individuals. Un-PC has characteristics of both SpaceMix and EEMS and works well with simulated and empirical data. Finally, we introduce msLandscape, a collection of tools that streamline the creation of customizable landscape-scale simulations using the popular coalescent simulator ms and conversion of the simulated data for use with un-PC, SpaceMix and EEMS. © 2017 John Wiley & Sons Ltd.
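un-PC itself is more involved, but the PCA step it builds on can be sketched directly; pairing the resulting scores with sampling coordinates is the essence of the model-free visualization described above. The toy genotype matrix of 0/1/2 allele counts below is hypothetical, and NumPy is assumed to be available.

```python
import numpy as np

def pca_scores(genotypes, n_components=2):
    """Project individuals onto the top principal components of an
    (individuals x loci) genotype matrix of 0/1/2 allele counts."""
    centered = genotypes - genotypes.mean(axis=0)
    # SVD of the centered matrix gives the principal axes directly.
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

# Hypothetical toy data: two groups of individuals differing at most loci
# should separate along PC1.
geno = np.array([[0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [2, 2, 1, 2],
                 [2, 1, 2, 2]], dtype=float)
scores = pca_scores(geno)
```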

  18. Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.

    PubMed

    Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin

    2014-02-01

    Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments of proteins and other macromolecules consume significant resources, in terms of both spectrometer time and effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including auxiliaries (waveforms, decoupling sequences, etc.), for analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of the NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Predicting the tolerated sequences for proteins and protein interfaces using RosettaBackrub flexible backbone design.

    PubMed

    Smith, Colin A; Kortemme, Tanja

    2011-01-01

    Predicting the set of sequences that are tolerated by a protein or protein interface, while maintaining a desired function, is useful for characterizing protein interaction specificity and for computationally designing sequence libraries to engineer proteins with new functions. Here we provide a general method, a detailed set of protocols, and several benchmarks and analyses for estimating tolerated sequences using flexible backbone protein design implemented in the Rosetta molecular modeling software suite. The input to the method is at least one experimentally determined three-dimensional protein structure or high-quality model. The starting structure(s) are expanded or refined into a conformational ensemble using Monte Carlo simulations consisting of backrub backbone and side chain moves in Rosetta. The method then uses a combination of simulated annealing and genetic algorithm optimization methods to enrich for low-energy sequences for the individual members of the ensemble. To emphasize certain functional requirements (e.g. forming a binding interface), interactions between and within parts of the structure (e.g. domains) can be reweighted in the scoring function. Results from each backbone structure are merged together to create a single estimate for the tolerated sequence space. We provide an extensive description of the protocol and its parameters, all source code, example analysis scripts and three tests applying this method to finding sequences predicted to stabilize proteins or protein interfaces. The generality of this method makes many other applications possible, for example stabilizing interactions with small molecules, DNA, or RNA. Through the use of within-domain reweighting and/or multistate design, it may also be possible to use this method to find sequences that stabilize particular protein conformations or binding interactions over others.
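As a generic illustration of the simulated-annealing component of such sequence optimization (with a toy mismatch-count energy standing in for Rosetta's score function; this is not the RosettaBackrub method itself, and the target sequence is hypothetical):

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def anneal_sequence(energy, length, steps=5000, t_start=2.0, t_end=0.05,
                    seed=1):
    """Simulated annealing over fixed-length amino acid strings; `energy`
    maps a sequence to a score to minimise. Geometric cooling schedule."""
    rng = random.Random(seed)
    seq = "".join(rng.choice(AMINO_ACIDS) for _ in range(length))
    e = best_e = energy(seq)
    best = seq
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)
        pos = rng.randrange(length)
        cand = seq[:pos] + rng.choice(AMINO_ACIDS) + seq[pos + 1:]
        ce = energy(cand)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if ce <= e or rng.random() < math.exp((e - ce) / t):
            seq, e = cand, ce
            if e < best_e:
                best, best_e = seq, e
    return best, best_e

# Toy energy with a known global minimum: count mismatches to a fixed
# (hypothetical) target sequence.
target = "MKTAYIAKQR"
best, best_e = anneal_sequence(
    lambda s: sum(a != b for a, b in zip(s, target)), length=len(target))
```

In the actual protocol, low-energy sequences collected across the conformational ensemble are pooled to estimate the tolerated sequence space.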

  20. An efficient and scalable graph modeling approach for capturing information at different levels in next generation sequencing reads

    PubMed Central

    2013-01-01

    Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
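One level of the overlap-graph coarsening idea can be sketched as heavy-edge matching: visit overlaps in decreasing weight order and merge each still-unmatched read pair into a coarse node. The read names and overlap lengths below are hypothetical, and this is a simplified stand-in for the paper's full coarsening scheme.

```python
def coarsen_once(nodes, weights):
    """One coarsening level by heavy-edge matching over an overlap graph.
    `weights` maps frozenset({u, v}) -> overlap length."""
    merged = {}
    matched = set()
    for pair, _w in sorted(weights.items(), key=lambda kv: -kv[1]):
        u, v = sorted(pair)
        if u not in matched and v not in matched:
            matched.update(pair)
            merged[u] = merged[v] = u + "+" + v
    coarse = {merged.get(n, n) for n in nodes}
    return coarse, merged

# Hypothetical reads and pairwise overlap lengths.
nodes = {"a", "b", "c", "d"}
overlaps = {frozenset(("a", "b")): 30,
            frozenset(("b", "c")): 10,
            frozenset(("c", "d")): 25}
coarse, merged = coarsen_once(nodes, overlaps)
```

Applying `coarsen_once` repeatedly yields the series of progressively smaller graphs, and reading the merge map backwards recovers read clusters at each granularity level.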

  1. Quasi-Dynamic Versus Fully-Dynamic Simulations of Slip Accumulation on Faults with Enhanced Dynamic Weakening

    NASA Astrophysics Data System (ADS)

    Lapusta, N.; Thomas, M.; Noda, H.; Avouac, J.

    2012-12-01

    Long-term simulations that incorporate both seismic events and aseismic slip are quite important for studies of earthquake physics but challenging computationally. To study long deformation histories, most simulation methods do not incorporate full inertial effects (wave propagation) during simulated earthquakes, using quasi-dynamic approximations instead. Here we compare the results of quasi-dynamic simulations to the fully dynamic ones for a range of problems to determine the applicability of the quasi-dynamic approach. Intuitively, the quasi-dynamic approach should do relatively well in problems where wave-mediated effects are relatively simple but should have substantially different (and hence wrong) response when the wave-mediated stress transfers dominate the character of the seismic events. This is exactly what we observe in our simulations. We consider a 2D model of a rate-and-state fault with a seismogenic (steady-state velocity-weakening) zone surrounded by creeping (steady-state velocity-strengthening) areas. If the seismogenic zone is described by the standard Dieterich-Ruina rate-and-state friction, the resulting earthquake sequences consist of relatively simple crack-like ruptures, and the inclusion of true wave-propagation effects mostly serves to concentrate stress more efficiently at the rupture front. Hence, in such models, rupture speeds and slip rates are significantly (several times) lower in the quasi-dynamic simulations compared to the fully dynamic ones, but the total slip, the crack-like nature of seismic events, and the overall pattern of earthquake sequences is comparable, consistently with prior studies. Such behavior can be classified as qualitatively similar but quantitatively different, and it motivates the popularity of the quasi-dynamic methods in simulations. However, the comparison changes dramatically once we consider a model with enhanced dynamic weakening in the seismogenic zone in the form of flash heating. 
In this case, the fully dynamic simulations produce seismic ruptures in the form of short-duration slip pulses, where the pulses form due to a combination of enhanced weakening and wave effects. The quasi-dynamic simulations in the same model produce completely different results, with large crack-like ruptures, different total slips, different rupture patterns, and different prestress state before large, model-spanning events. Such qualitative differences between the quasi-dynamic and fully dynamic simulations should arise in any model where inertial effects lead to qualitative differences, such as cases with supershear transition or faults with different materials on the two sides. We will present results on our current work on how the quasi-dynamic and fully dynamic simulations compare for cases with heterogeneous fault properties.

  2. A Simulation of DNA Sequencing Utilizing 3M Post-It[R] Notes

    ERIC Educational Resources Information Center

    Christensen, Doug

    2009-01-01

    An inexpensive and equipment-free approach to teaching the technical aspects of DNA sequencing is presented. The activity described requires an instructor familiar with DNA sequencing technology but provides a straightforward method of teaching the technical aspects of sequencing in the absence of expensive sequencing equipment. The final sequence…

  3. Inter-level Scaffolding and Sequences of Representational Activities in Teaching a Chemical System with Graphical Simulations

    NASA Astrophysics Data System (ADS)

    Li, Na; Black, John B.

    2016-10-01

    Chemistry knowledge can be represented at macro-, micro- and symbolic levels, and learning a chemistry topic requires students to engage in multiple representational activities. This study focused on scaffolding for inter-level connection-making in learning chemistry knowledge with graphical simulations. We also tested whether different sequences of representational activities produced different student learning outcomes in learning a chemistry topic. A sample of 129 seventh graders participated in this study. In a simulation-based environment, participants completed three representational activities to learn several ideal gas law concepts. We conducted a 2 × 3 factorial design experiment. We compared two scaffolding conditions: (1) the inter-level scaffolding condition in which participants received inter-level questions and experienced the dynamic link function in the simulation-based environment and (2) the intra-level scaffolding condition in which participants received intra-level questions and did not experience the dynamic link function. We also compared three different sequences of representational activities: macro-symbolic-micro, micro-symbolic-macro and symbolic-micro-macro. For the scaffolding variable, we found that the inter-level scaffolding condition produced significantly better performance in both knowledge comprehension and application, compared to the intra-level scaffolding condition. For the sequence variable, we found that the macro-symbolic-micro sequence produced significantly better knowledge comprehension performance than the other two sequences; however, it did not benefit knowledge application performance. There was a trend that the treatment group who experienced inter-level scaffolding and the micro-symbolic-macro sequence achieved the best knowledge application performance.

  4. Selecting sequence variants to improve genomic predictions for dairy cattle

    USDA-ARS?s Scientific Manuscript database

    Millions of genetic variants have been identified by population-scale sequencing projects, but subsets are needed for routine genomic predictions or to include on genotyping arrays. Methods of selecting sequence variants were compared using both simulated sequence genotypes and actual data from run ...

  5. nbCNV: a multi-constrained optimization model for discovering copy number variants in single-cell sequencing data.

    PubMed

    Zhang, Changsheng; Cai, Hongmin; Huang, Jingying; Song, Yan

    2016-09-17

    Variations in DNA copy number have an important contribution to the development of several diseases, including autism, schizophrenia and cancer. Single-cell sequencing technology allows the dissection of genomic heterogeneity at the single-cell level, thereby providing important evolutionary information about cancer cells. In contrast to traditional bulk sequencing, single-cell sequencing requires the amplification of the whole genome of a single cell to accumulate enough samples for sequencing. However, the amplification process inevitably introduces amplification bias, resulting in an over-dispersed portion of the sequencing data. A recent study showed that the over-dispersed portion of single-cell sequencing data can be well modelled by negative binomial distributions. We developed a read-depth based method, nbCNV, to detect copy number variants (CNVs). The nbCNV method uses two constraints, sparsity and smoothness, to fit the CNV patterns under the assumption that the read signals are negatively binomially distributed. The problem of CNV detection was formulated as a quadratic optimization problem, and was solved by an efficient numerical solution based on the classical alternating direction minimization method. Extensive experiments to compare nbCNV with existing benchmark models were conducted on both simulated data and empirical single-cell sequencing data. The results of those experiments demonstrate that nbCNV achieves superior performance and high robustness for the detection of CNVs in single-cell sequencing data.
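nbCNV couples a negative-binomial likelihood with sparsity and smoothness constraints solved by alternating direction minimization. As a much simpler stand-in for the same segmentation idea, a changepoint formulation with a least-squares segment cost and a fixed penalty per breakpoint can be solved exactly by dynamic programming (the read-depth profile below is hypothetical):

```python
def segment_depth(depth, penalty):
    """Optimal partitioning of a read-depth profile: minimise the sum of
    within-segment squared errors plus `penalty` per changepoint.
    O(n^2) dynamic programme; returns segment end positions (exclusive)."""
    n = len(depth)
    prefix, prefix_sq = [0.0], [0.0]
    for x in depth:
        prefix.append(prefix[-1] + x)
        prefix_sq.append(prefix_sq[-1] + x * x)

    def sse(i, j):  # squared error of fitting depth[i:j] by its mean
        s = prefix[j] - prefix[i]
        return (prefix_sq[j] - prefix_sq[i]) - s * s / (j - i)

    best = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            cost = best[i] + sse(i, j) + (penalty if i > 0 else 0.0)
            if cost < best[j]:
                best[j], back[j] = cost, i
    bounds, j = [], n
    while j > 0:
        bounds.append(j)
        j = back[j]
    return sorted(bounds)

# Hypothetical profile: diploid baseline with a single-copy gain in the middle.
depth = [10] * 20 + [15] * 10 + [10] * 20
bounds = segment_depth(depth, penalty=30.0)
```

Replacing the squared-error cost with a negative-binomial log-likelihood moves this sketch in the direction of nbCNV's model, though the paper's solver and constraints differ.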

  6. Simulating protein folding initiation sites using an alpha-carbon-only knowledge-based force field

    PubMed Central

    Buck, Patrick M.; Bystroff, Christopher

    2015-01-01

    Protein folding is a hierarchical process where structure forms locally first, then globally. Some short sequence segments initiate folding through strong structural preferences that are independent of their three-dimensional context in proteins. We have constructed a knowledge-based force field in which the energy functions are conditional on local sequence patterns, as expressed in the hidden Markov model for local structure (HMMSTR). Carbon-alpha force field (CALF) builds sequence specific statistical potentials based on database frequencies for α-carbon virtual bond opening and dihedral angles, pairwise contacts and hydrogen bond donor-acceptor pairs, and simulates folding via Brownian dynamics. We introduce hydrogen bond donor and acceptor potentials as α-carbon probability fields that are conditional on the predicted local sequence. Constant temperature simulations were carried out using 27 peptides selected as putative folding initiation sites, each 12 residues in length, representing several different local structure motifs. Each 0.6 μs trajectory was clustered based on structure. Simulation convergence or representativeness was assessed by subdividing trajectories and comparing clusters. For 21 of the 27 sequences, the largest cluster made up more than half of the total trajectory. Of these 21 sequences, 14 had cluster centers that were at most 2.6 Å root mean square deviation (RMSD) from their native structure in the corresponding full-length protein. To assess the adequacy of the energy function on nonlocal interactions, 11 full length native structures were relaxed using Brownian dynamics simulations. Equilibrated structures deviated from their native states but retained their overall topology and compactness. A simple potential that folds proteins locally and stabilizes proteins globally may enable a more realistic understanding of hierarchical folding pathways. PMID:19137613

  7. Molecular Structure and Sequence in Complex Coacervates

    NASA Astrophysics Data System (ADS)

    Sing, Charles; Lytle, Tyler; Madinya, Jason; Radhakrishna, Mithun

    Oppositely-charged polyelectrolytes in aqueous solution can undergo associative phase separation, in a process known as complex coacervation. This results in a polyelectrolyte-dense phase (coacervate) and polyelectrolyte-dilute phase (supernatant). There remain challenges in understanding this process, despite a long history in polymer physics. We use Monte Carlo simulation to demonstrate that molecular features (charge spacing, size) play a crucial role in governing the equilibrium in coacervates. We show how these molecular features give rise to strong monomer sequence effects, due to a combination of counterion condensation and correlation effects. We distinguish between structural and sequence-based correlations, which can be designed to tune the phase diagram of coacervation. Sequence effects further inform the physical understanding of coacervation, and provide the basis for new coacervation models that take monomer-level features into account.

  8. Rényi continuous entropy of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2004-12-07

    Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words but has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed: Rényi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate global sequence randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced, and entropies were calculated for several synthetic and experimental biological sequences. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences showed different p-values according to the kernel resolution of Parzen's method, which might indicate an unknown level of organization in their patterns. This new technique can be very useful in the study of DNA sequence complexity and provides additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.
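    As a rough, self-contained illustration of the quantity this abstract describes (not the authors' MATLAB code), the sketch below maps a DNA string to Chaos Game Representation points and evaluates the Rényi quadratic entropy of a Gaussian Parzen estimate, using the closed form for the integrated squared density; the test sequences and kernel width are hypothetical choices.

    ```python
    import math

    def cgr_points(seq):
        """Chaos Game Representation: map a DNA string to points in the unit square."""
        corners = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}
        x, y = 0.5, 0.5
        pts = []
        for base in seq:
            cx, cy = corners[base]
            x, y = (x + cx) / 2.0, (y + cy) / 2.0  # move halfway toward the base's corner
            pts.append((x, y))
        return pts

    def renyi2_entropy(points, sigma):
        """Renyi quadratic entropy of a Gaussian Parzen density estimate.

        For 2D Gaussian kernels of width sigma, the integral of the squared
        density has the closed form (1/N^2) * sum_ij G(xi - xj; 2*sigma^2),
        so H2 = -log of that double sum (no numerical integration needed).
        """
        n = len(points)
        norm = 1.0 / (4.0 * math.pi * sigma * sigma)  # 2D Gaussian, variance 2*sigma^2
        total = 0.0
        for (x1, y1) in points:
            for (x2, y2) in points:
                d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
                total += norm * math.exp(-d2 / (4.0 * sigma * sigma))
        return -math.log(total / (n * n))

    h_repeat = renyi2_entropy(cgr_points("A" * 60), sigma=0.05)
    h_mixed = renyi2_entropy(cgr_points("ACGTTGCAAGCTGGTACGCTAGCTTGACGTAC" * 2), sigma=0.05)
    ```

    A repetitive sequence concentrates its CGR points near one corner and scores a lower entropy than a mixed one.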

  9. IMM estimator with out-of-sequence measurements

    NASA Astrophysics Data System (ADS)

    Bar-Shalom, Yaakov; Chen, Huimin

    2004-08-01

    In multisensor tracking systems that operate in a centralized information processing architecture, measurements from the same target obtained by different sensors can arrive at the processing center out of sequence. In order to avoid either a delay in the output or the need for reordering and reprocessing an entire sequence of measurements, such measurements have to be processed as out-of-sequence measurements (OOSM). Recent work developed procedures for incorporating OOSMs into a Kalman filter (KF). Since the state of the art tracker for real (maneuvering) targets is the Interacting Multiple Model (IMM) estimator, this paper presents the algorithm for incorporating OOSMs into an IMM estimator. Both data association and estimation are considered. Simulation results are presented for two realistic problems using measurements from two airborne GMTI sensors. It is shown that the proposed algorithm for incorporating OOSMs into an IMM estimator yields practically the same performance as the reordering and in-sequence reprocessing of the measurements.

  10. Simulation of flexible appendage interactions with Mariner Venus/Mercury attitude control and science platform pointing

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1973-01-01

    A new computer subroutine, which solves the attitude equations of motion for any vehicle idealized as a topological tree of hinge-connected rigid bodies, is used to simulate and analyze science instrument pointing control interaction with a flexible Mariner Venus/Mercury (MVM) spacecraft. The subroutine's user options include linearized or partially linearized hinge-connected models whose computational advantages are demonstrated for the MVM problem. Results of the pointing control/flexible vehicle interaction simulations, including imaging experiment pointing accuracy predictions and implications for MVM science sequence planning, are described in detail.

  11. The Importance of Time and Frequency Reference in Quantum Astronomy and Quantum Communications

    DTIC Science & Technology

    2007-11-01

    simulator, but the same general results are valid for optical fiber and also different quantum state transmission technologies (i.e. Entangled Photons ...protocols [6]). The Matlab simulation starts from a sequence of pulses of duration Ton; the number of photons per pulse has been implemented like a...astrophysical emission mechanisms or scattering processes by measuring the statistics of the arrival time of each incoming photon . This line of research will be

  12. Simulation of Stress-Strain State of Shovel Rotary Support Kingpin

    NASA Astrophysics Data System (ADS)

    Khoreshok, A. A.; Buyankin, P. V.; Vorobiev, A. V.; Dronov, A. A.

    2016-04-01

    The article presents the sequence of computational simulation of the stress-strain state of a shovel's rotary support. Computation results are analyzed, the kingpin is identified as the most loaded element, and maximum stress zones are located. Kingpin design modifications, such as increasing the fillet curvature radius to 25 mm and relocating the eyebolt holes to a diameter of 165 mm, are proposed, thus diminishing the impact of stress concentrators and improving the reliability of the rotary support.

  13. Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.

    PubMed

    Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu

    2015-06-01

    High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.
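    The bias that motivates the authors' likelihood method is easy to reproduce in a toy simulation (a hedged illustration only, not their EM algorithm): fit ordinary least squares on a full cohort versus on only the trait extremes, and the extreme-sampled slope comes out inflated.

    ```python
    import random

    def ols_slope(xs, ys):
        """Ordinary least-squares slope of y on x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        return sxy / sxx

    rng = random.Random(2024)
    beta = 0.5  # true genetic effect (hypothetical)
    cohort = [(x, beta * x + rng.gauss(0.0, 1.0))
              for x in (rng.gauss(0.0, 1.0) for _ in range(50000))]

    # trait-dependent sampling: "sequence" only the top and bottom 5% of the trait y
    cohort_sorted = sorted(cohort, key=lambda p: p[1])
    extremes = cohort_sorted[:2500] + cohort_sorted[-2500:]

    slope_full = ols_slope([x for x, _ in cohort], [y for _, y in cohort])
    slope_extreme = ols_slope([x for x, _ in extremes], [y for _, y in extremes])
    ```

    Standard regression on the extreme-sampled subset exaggerates the effect size, which is why a likelihood that models the sampling mechanism is needed.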

  14. Pyrosequencing analysis of the gyrB gene to differentiate bacteria responsible for diarrheal diseases.

    PubMed

    Hou, X-L; Cao, Q-Y; Jia, H-Y; Chen, Z

    2008-07-01

    Pathogens causing acute diarrhea include a large variety of species from Enterobacteriaceae and Vibrionaceae. A method based on pyrosequencing was used here to differentiate bacteria commonly associated with diarrhea in China; the method targets a partial amplicon of the gyrB gene, which encodes the B subunit of DNA gyrase. Twenty-eight specific polymorphic positions were identified from sequence alignment of a large sequence dataset and targeted using 17 sequencing primers. Of 95 isolates tested, belonging to 13 species within 7 genera, most could be identified to the species level; the O157 type could be differentiated from other E. coli types; Salmonella enterica subsp. enterica could be identified at the serotype level; the genus Shigella, except for S. boydii and S. dysenteriae, could also be identified. All these isolates were also subjected to conventional sequencing of a relatively long (approximately 1.2 kb) region of gyrB DNA; these results confirmed those obtained with pyrosequencing. Twenty-two fecal samples were surveyed, the results of which were concordant with culture-based bacterial identification, and the pathogen detection limit with simulated stool specimens was 10^4 CFU/ml. DNA from different pathogens was also mixed to simulate a case of multibacterial infection, and the generated signals correlated well with the mix ratio. In summary, the gyrB-based pyrosequencing approach proved to have significant reliability and discriminatory power for enteropathogenic bacterial identification and provides a fast and effective method for clinical diagnosis.

  15. Isosteric And Non-Isosteric Base Pairs In RNA Motifs: Molecular Dynamics And Bioinformatics Study Of The Sarcin-Ricin Internal Loop

    PubMed Central

    Havrila, Marek; Réblová, Kamila; Zirbel, Craig L.; Leontis, Neocles B.; Šponer, Jiří

    2013-01-01

    The Sarcin-Ricin RNA motif (SR motif) is one of the most prominent recurrent RNA building blocks; it occurs in many different RNA contexts and folds autonomously, i.e., in a context-independent manner. In this study, we combined bioinformatics analysis with explicit-solvent molecular dynamics (MD) simulations to better understand the relation between the RNA sequence and the evolutionary patterns of the SR motif. A SHAPE probing experiment was also performed to confirm the fidelity of the MD simulations. We identified 57 instances of the SR motif in a non-redundant subset of the RNA X-ray structure database and analyzed their basepairing, base-phosphate, and backbone-backbone interactions. We extracted sequences aligned to these instances from large ribosomal RNA alignments to determine the frequency of occurrence of different sequence variants. We then used a simple scoring scheme based on isostericity to suggest 10 sequence variants with a highly variable expected degree of compatibility with the SR motif 3D structure. We carried out MD simulations of SR motifs with these base substitutions. Non-isosteric base substitutions led to unstable structures, but so did isosteric substitutions that were unable to make key base-phosphate interactions. The MD technique explains why some potentially isosteric SR motifs are not realized during evolution. We also found that inability to form a stable cWW geometry is an important factor in the case of the first base pair of the flexible region of the SR motif. Comparison of structural, bioinformatics, SHAPE probing and MD simulation data reveals that explicit-solvent MD simulations neatly reflect the viability of different sequence variants of the SR motif. Thus, MD simulations can efficiently complement bioinformatics tools in studies of conservation patterns of RNA motifs and provide atomistic insight into the role of their different signature interactions. PMID:24144333

  16. Microsecond Simulations of DNA and Ion Transport in Nanopores with Novel Ion-Ion and Ion-Nucleotides Effective Potentials

    PubMed Central

    De Biase, Pablo M.; Markosyan, Suren; Noskov, Sergei

    2014-01-01

    We developed a novel scheme based on Grand-Canonical Monte-Carlo/Brownian Dynamics (GCMC/BD) simulations and extended it to studies of ion currents across three nanopores with potential for ssDNA sequencing: the solid-state Si3N4 nanopore, α-hemolysin, and the E111N/M113Y/K147N mutant. To describe nucleotide-specific ion dynamics compatible with the coarse-grained ssDNA model, we used the Inverse Monte-Carlo protocol, which maps the relevant ion-nucleotide distribution functions from all-atom MD simulations. Combined with the previously developed simulation platform for Brownian Dynamics (BD) simulations of ion transport, it allows for microsecond- and millisecond-long simulations of ssDNA dynamics in the nanopore with a conductance computation accuracy that equals or exceeds that of all-atom MD simulations. In spite of the simplifications, the protocol produces results that agree with previous studies of ion conductance across open channels and provides direct correlations with experimentally measured blockade currents and with ion conductances estimated from all-atom MD simulations. PMID:24738152

  17. Development of Ground Test System For RKX-200EB

    NASA Astrophysics Data System (ADS)

    Yudhi Irwanto, Herma

    2018-04-01

    After being postponed for seven years, the development of the RKX-200EB has now restarted with a ground test preceding the real flight test. The development series starts with a simulation test using the real vehicle and its components, focusing on a flight sequence test using hardware-in-the-loop simulation. The simulation results show that the autonomous control system under development is able to control the X-tail-fin vehicle from takeoff using the booster, through booster-sustainer separation, to flight maneuvers using the sustainer at an average cruise speed of 1000 km/h, including bank-to-turn maneuvers of up to ±40 deg heading toward the target. The simulation results also show that the presence of the sustainer in vehicle control can extend the range by 162% (12.6 km) over the ballistic range achieved using only the booster.

  18. IgSimulator: a versatile immunosequencing simulator.

    PubMed

    Safonova, Yana; Lapidus, Alla; Lill, Jennie

    2015-10-01

    The recent introduction of next-generation sequencing technologies to antibody studies has resulted in a growing number of immunoinformatics tools for antibody repertoire analysis. However, benchmarking these newly emerging tools remains problematic, since the gold-standard datasets needed to validate them are typically not available. Since simulating antibody repertoires is often the only feasible way to benchmark new immunoinformatics tools, we developed the IgSimulator tool, which addresses various complications in generating realistic antibody repertoires. IgSimulator's code has a modular structure and can be easily adapted to new simulation requirements. IgSimulator is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from yana-safonova.github.io/ig_simulator. Contact: safonova.yana@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Simulation of Electronic Circular Dichroism of Nucleic Acids: From the Structure to the Spectrum.

    PubMed

    Padula, Daniele; Jurinovich, Sandro; Di Bari, Lorenzo; Mennucci, Benedetta

    2016-11-14

    We present a quantum mechanical (QM) simulation of the electronic circular dichroism (ECD) of nucleic acids (NAs). The simulation combines classical molecular dynamics, to obtain the structure and its temperature-dependent fluctuations, with a QM excitonic model to determine the ECD. The excitonic model takes into account environmental effects through a polarizable embedding and uses a refined approach to calculate the electronic couplings in terms of full transition densities. Three NAs with either similar conformations but different base sequences or similar base sequences but different conformations have been investigated and the results were compared with experimental observations; a good agreement was seen in all cases. A detailed analysis of the nature of the ECD bands in terms of their excitonic composition was also carried out. Finally, a comparison between the QM and the DeVoe models clearly revealed the importance of including fluctuations of the excitonic parameters and of accurately determining the electronic couplings. This study demonstrates the feasibility of the ab initio simulation of the ECD spectra of NAs, that is, without the need of experimental structural or electronic data. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Student Recognition of Visual Affordances: Supporting Use of Physics Simulations in Whole Class and Small Group Settings

    ERIC Educational Resources Information Center

    Stephens, A. Lynn

    2012-01-01

    The purpose of this study is to investigate student interactions with simulations, and teacher support of those interactions, within naturalistic high school physics classroom settings. This study focuses on data from two lesson sequences that were conducted in several physics classrooms. The lesson sequences were conducted in a whole class…

  1. Whole-Genome Sequences of Four Strains Closely Related to Members of the Mycobacterium chelonae Group, Isolated from Biofilms in a Drinking Water Distribution System Simulator

    EPA Science Inventory

    We report the draft genome sequences of four Mycobacterium chelonae group strains from biofilms obtained after a ‘chlorine burn’ in a chloraminated drinking water distribution system simulator. These opportunistic pathogens have been detected in drinking and hospital water distr...

  2. Using Playing Cards to Simulate a Molecular Clock

    ERIC Educational Resources Information Center

    Westerling, Karin E.

    2008-01-01

    Changes in DNA base pairs may serve as an indicator of the time elapsed since divergence from a common ancestor, and DNA sequences can now be analyzed directly. The simulation presented in this article allows students to observe the accumulation of changes in a randomly mutating sequence of playing cards. The cards are analogous to DNA nucleotide or protein…

  3. DUK - A Fast and Efficient Kmer Based Sequence Matching Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mingkun; Copeland, Alex; Han, James

    2011-03-21

    A new tool, DUK, was developed to perform the matching task. Matching determines whether a query sequence partially or totally matches given reference sequences. Matching is similar to alignment; indeed, many traditional analysis tasks like contaminant removal use alignment tools. But for matching there is no need to know which bases of a query sequence match which positions of a reference sequence; it is only necessary to know whether a match exists. This subtle difference can make matching much faster than alignment. DUK is accurate, versatile, fast, and has efficient memory usage. It uses a Kmer hashing method to index reference sequences and a Poisson model to calculate p-values. DUK is carefully implemented in C++ in an object-oriented design, and the resulting classes can also be used to develop other tools quickly. DUK has been widely used at JGI for a wide range of applications such as contaminant removal, organelle genome separation, and assembly refinement. Many real applications and simulated datasets demonstrate its power.
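    The matching idea is simple enough to sketch in a few lines (a hedged toy version, not DUK itself; the 16-mer size and the uniform-background Poisson rate are assumptions): index the reference's kmers in a set, count query kmers that hit the index, and score the hit count against a Poisson model of chance matches.

    ```python
    import math
    import random

    def build_kmer_index(reference, k):
        """Index the reference by storing the set of all its length-k substrings."""
        return {reference[i:i + k] for i in range(len(reference) - k + 1)}

    def match_query(query, index, k, kmer_space=4):
        """Count query kmers present in the index and score the count against a
        Poisson model of chance matches (assumes uniform random bases)."""
        n_kmers = len(query) - k + 1
        hits = sum(1 for i in range(n_kmers) if query[i:i + k] in index)
        # expected chance hits: each query kmer matches with prob ~ |index| / 4^k
        lam = n_kmers * min(1.0, len(index) / float(kmer_space ** k))
        # p-value: P(X >= hits) for X ~ Poisson(lam)
        p = 1.0 - sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(hits))
        return hits, max(p, 0.0)

    random.seed(1)
    reference = "".join(random.choice("ACGT") for _ in range(2000))
    index = build_kmer_index(reference, k=16)

    contaminant = reference[500:540]  # a read drawn from the reference
    unrelated = "".join(random.choice("ACGT") for _ in range(40))
    hits_c, p_c = match_query(contaminant, index, k=16)
    hits_u, p_u = match_query(unrelated, index, k=16)
    ```

    The read drawn from the reference hits on every kmer with a vanishing p-value; the unrelated read does not, which is all a contaminant filter needs to know.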

  4. EggLib: processing, analysis and simulation tools for population genetics and genomics

    PubMed Central

    2012-01-01

    Background: With the considerable growth of available nucleotide sequence data over the last decade, integrated and flexible analytical tools have become a necessity. In particular, in the field of population genetics, there is a strong need for automated and reliable procedures to conduct repeatable and rapid polymorphism analyses, coalescent simulations, data manipulation and estimation of demographic parameters under a variety of scenarios. Results: In this context, we present EggLib (Evolutionary Genetics and Genomics Library), a flexible and powerful C++/Python software package providing efficient and easy-to-use computational tools for sequence data management and extensive population genetic analyses on nucleotide sequence data. EggLib is a multifaceted project involving several integrated modules: an underlying computationally efficient C++ library (which can be used independently in pure C++ applications); two C++ programs; a Python package providing, among other features, a high-level Python interface to the C++ library; and the egglib script, which provides direct access to pre-programmed Python applications. Conclusions: EggLib has been designed to be both efficient and easy to use. A wide array of methods is implemented, including file format conversion, sequence alignment edition, coalescent simulations, neutrality tests and estimation of demographic parameters by Approximate Bayesian Computation (ABC). Classes implementing different demographic scenarios for ABC analyses can easily be developed by the user and added to the package. EggLib source code is distributed freely under the GNU General Public License (GPL) from its website http://egglib.sourceforge.net/ where full documentation and a manual can also be found and downloaded. PMID:22494792
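    The kind of coalescent simulation such packages provide can be illustrated with a standalone sketch (this is not EggLib's API; all names here are hypothetical): draw exponential coalescence times while k lineages remain, then draw the number of segregating sites as a Poisson variable on the total branch length, and check the mean against Watterson's expectation.

    ```python
    import random

    def simulate_segregating_sites(n_samples, theta, rng):
        """One neutral-coalescent replicate: exponential waiting times while k
        lineages remain, then S ~ Poisson(theta/2 * total branch length)."""
        total_length = 0.0
        for k in range(n_samples, 1, -1):
            t_k = rng.expovariate(k * (k - 1) / 2.0)  # time with k lineages
            total_length += k * t_k
        lam = theta / 2.0 * total_length
        # Poisson draw via unit-rate exponential inter-arrival times
        count, acc = 0, rng.expovariate(1.0)
        while acc < lam:
            count += 1
            acc += rng.expovariate(1.0)
        return count

    rng = random.Random(123)
    n, theta, reps = 10, 5.0, 20000
    mean_s = sum(simulate_segregating_sites(n, theta, rng) for _ in range(reps)) / reps
    # Watterson's expectation: E[S] = theta * sum_{i=1}^{n-1} 1/i
    expected = theta * sum(1.0 / i for i in range(1, n))
    ```

    Over many replicates the simulated mean converges to Watterson's formula, the usual sanity check for a coalescent implementation.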

  5. OMV: A simplified mathematical model of the orbital maneuvering vehicle

    NASA Technical Reports Server (NTRS)

    Teoh, W.

    1984-01-01

    A model of the orbital maneuvering vehicle (OMV) is presented which contains several simplifications. A set of hand controller signals may be used to control the motion of the OMV. Model verification is carried out using a sequence of tests. The dynamic variables generated by the model are compared, whenever possible, with the corresponding analytical variables. The results of the tests show conclusively that the present model behaves correctly. Further, this model interfaces properly with the state vector transformation module (SVX) developed previously. Correct command sequences are generated by the OMV and SVX system, and these command sequences can be used to drive the flat-floor simulation system at MSFC.

  6. CNV-seq, a new method to detect copy number variation using high-throughput sequencing.

    PubMed

    Xie, Chao; Tammi, Martti T

    2009-03-06

    DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not the length of the reads, is the key factor determining the resolution of detection. This favors next-generation sequencing methods that rapidly produce large amounts of short reads. Simulation of various sequencing methods with coverage between 0.1x and 8x shows overall specificity between 91.7-99.9% and sensitivity between 72.2-96.5%. We also show results for assessment of CNV between two individual human genomes.
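    The windowed read-count comparison at the core of this kind of detection can be sketched as follows (a toy illustration with made-up counts and a fixed cutoff, not the authors' statistical model with confidence values):

    ```python
    import math

    def cnv_log2_ratios(case_counts, control_counts):
        """Per-window log2 ratio of read counts, normalized by total library size."""
        scale = sum(control_counts) / float(sum(case_counts))
        return [math.log((c * scale) / max(r, 1), 2)
                for c, r in zip(case_counts, control_counts)]

    def call_cnv_windows(ratios, threshold=0.6):
        """Flag windows whose |log2 ratio| exceeds a threshold (~1 would be a
        full copy gain on a diploid background; 0.6 is a lenient cutoff)."""
        return [i for i, r in enumerate(ratios) if abs(r) > threshold]

    # synthetic example: 50 windows, a duplication doubling windows 20-24
    control = [100] * 50
    case = [100] * 50
    for i in range(20, 25):
        case[i] = 200

    ratios = cnv_log2_ratios(case, control)
    calls = call_cnv_windows(ratios)
    ```

    The doubled windows stand out near log2 ratio ~0.9 after library-size normalization, while unchanged windows stay near zero; the real method replaces the fixed cutoff with calibrated confidence values.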

  7. Continuous- and discrete-time stimulus sequences for high stimulus rate paradigm in evoked potential studies.

    PubMed

    Wang, Tao; Huang, Jiang-hua; Lin, Lin; Zhan, Chang'an A

    2013-01-01

    To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using the high stimulus rate (HSR) paradigm, it is critical to design stimulus sequences with appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points that depend on the sampling frequency of the recording system and the duration of the stimulus sequence. This dependency likely leads to the implementation of suboptimal stimulus sequences, sacrificing the reliability of the resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequences for the HSR paradigm, which are independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of continuous-time stimulus sequences and the impact of sampling frequency on AEPs in traditional studies using discrete-time designs. Results from these studies show that continuous-time sequences can offer better frequency properties and improve the reliability of the recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of the experimental systems, and that their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for the HSR paradigm and by revealing the relationship between the reliability of AEPs and the sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in the traditional manner for the HSR paradigm.
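    The recovery problem behind the HSR paradigm can be sketched as circular deconvolution (a minimal stdlib sketch, not the authors' continuous-time design; the maximal-length 0/1 stimulus sequence is an assumption chosen so that its DFT has no zero bins and the division is well posed):

    ```python
    import cmath
    import math

    def dft(x):
        n = len(x)
        return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
                for k in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
                for t in range(n)]

    def msequence(length=127):
        """0/1 maximal-length sequence from a 7-bit LFSR (x^7 + x^6 + 1);
        its DFT magnitude is bounded away from zero at every bin."""
        state = [1] * 7
        out = []
        for _ in range(length):
            out.append(state[6])
            fb = state[6] ^ state[5]
            state = [fb] + state[:6]
        return out

    n = 127
    stim = msequence(n)
    # a made-up "true" evoked potential: brief damped oscillation
    aep = [math.exp(-t / 8.0) * math.sin(2 * math.pi * t / 15.0) if t < 30 else 0.0
           for t in range(n)]
    # HSR recording: overlapping responses = circular convolution of stimuli and AEP
    eeg = [sum(stim[m] * aep[(t - m) % n] for m in range(n)) for t in range(n)]
    # recover the transient AEP by frequency-domain division
    S, Y = dft(stim), dft(eeg)
    recovered = idft([Y[k] / S[k] for k in range(n)])
    ```

    With a noise-free recording the transient response is recovered essentially exactly; a poorly conditioned stimulus spectrum (near-zero bins) is precisely what good sequence design avoids.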

  8. Detecting exact breakpoints of deletions with diversity in hepatitis B viral genomic DNA from next-generation sequencing data.

    PubMed

    Cheng, Ji-Hong; Liu, Wen-Chun; Chang, Ting-Tsung; Hsieh, Sun-Yuan; Tseng, Vincent S

    2017-10-01

    Many studies have suggested that deletions in hepatitis B virus (HBV) genomic DNA are associated with the development of progressive liver diseases, even ultimately resulting in hepatocellular carcinoma (HCC). Among the methods for detecting deletions from next-generation sequencing (NGS) data, few consider the characteristics of viruses, such as high evolution rates and high divergence among different HBV genomes. Sequencing highly divergent HBV genome sequences with NGS technology outputs millions of reads, so detecting exact deletion breakpoints from these large and complex data incurs a very high computational cost. We proposed a novel analytical method named VirDelect (Virus Deletion Detect), which uses split-read alignment to detect exact breakpoints and a diversity variable to account for high divergence in single-end read data, so that the computational cost can be reduced without losing accuracy. We used four simulated read datasets and two real paired-end read datasets of HBV genome sequences to verify VirDelect's accuracy with score functions. The experimental results show that VirDelect outperforms the state-of-the-art method Pindel in accuracy score on all simulated datasets, and VirDelect had only two base errors even on the real datasets. VirDelect also delivers high accuracy on single-end as well as paired-end read data. VirDelect can serve as an effective and efficient bioinformatics tool for physiologists and is applicable to further analysis of genomes with characteristics similar to HBV in genome length and high divergence. The software program of VirDelect can be downloaded at https://sourceforge.net/projects/virdelect/. Copyright © 2017. Published by Elsevier Inc.
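    The split-read idea can be sketched with exact string matching standing in for real alignment (a hedged toy that ignores sequencing errors and the method's diversity variable; coordinates and anchor length are illustrative):

    ```python
    import random

    def find_deletion(reference, read, min_anchor=8):
        """Split-read sketch: anchor the read's prefix and suffix on the
        reference; a gap between the two anchors is called as a deletion,
        returned as (start, end) in reference coordinates."""
        half = len(read) // 2
        if half < min_anchor:
            return None
        left, right = read[:half], read[half:]
        pos_l = reference.find(left)
        if pos_l < 0:
            return None
        pos_r = reference.find(right, pos_l + len(left))
        if pos_r < 0 or pos_r == pos_l + len(left):
            return None  # unmapped, or contiguous (no deletion)
        return (pos_l + len(left), pos_r)

    random.seed(7)
    reference = "".join(random.choice("ACGT") for _ in range(300))
    # a read spanning a deletion of reference[120:160]
    read = reference[100:120] + reference[160:180]
    deletion = find_deletion(reference, read)
    ```

    The read's two halves map 40 bases apart, so the exact breakpoints fall out of the anchor positions; real data additionally needs error-tolerant alignment and handling of reads whose split point is not mid-read.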

  9. The development and optimisation of 3D black-blood R2* mapping of the carotid artery wall.

    PubMed

    Yuan, Jianmin; Graves, Martin J; Patterson, Andrew J; Priest, Andrew N; Ruetten, Pascal P R; Usman, Ammara; Gillard, Jonathan H

    2017-12-01

    To develop and optimise a 3D black-blood R2* mapping sequence for imaging the carotid artery wall, using optimal blood suppression and k-space view ordering. Two different blood suppression preparation methods were used: Delay Alternating with Nutation for Tailored Excitation (DANTE) and improved Motion Sensitive Driven Equilibrium (iMSDE) were each combined with a three-dimensional (3D) multi-echo Fast Spoiled GRadient echo (ME-FSPGR) readout. Three different k-space view-order designs: Radial Fan-beam Encoding Ordering (RFEO), Distance-Determined Encoding Ordering (DDEO) and Centric Phase Encoding Order (CPEO) were investigated. The sequences were evaluated through Bloch simulation and in a cohort of twenty volunteers. The vessel wall Signal-to-Noise Ratio (SNR), Contrast-to-Noise Ratio (CNR) and R2*, and the sternocleidomastoid muscle R2* were measured and compared. Different numbers of acquisitions-per-shot (APS) were evaluated to further optimise the effectiveness of blood suppression. All sequences resulted in comparable R2* measurements to a conventional, i.e. non-blood-suppressed sequence in the sternocleidomastoid muscle of the volunteers. Both Bloch simulations and volunteer data showed that DANTE has a higher signal intensity and results in a higher image SNR than iMSDE. Blood suppression efficiency was not significantly different when using different k-space view orders. Smaller APS achieved better blood suppression. The use of blood-suppression preparation methods does not affect the measurement of R2*. A DANTE-prepared ME-FSPGR sequence with a small number of acquisitions-per-shot can provide high quality black-blood R2* measurements of the carotid vessel wall. Copyright © 2017 Elsevier Inc. All rights reserved.
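    Although the abstract is about pulse-sequence design, the underlying R2* estimate is a log-linear fit of the multi-echo signal model S(TE) = S0 * exp(-R2* * TE); a minimal sketch with synthetic, noise-free echo data (the echo times and tissue parameters are hypothetical):

    ```python
    import math

    def fit_r2star(echo_times, signals):
        """Estimate R2* and S0 from multi-echo magnitudes via a log-linear
        least-squares fit of S(TE) = S0 * exp(-R2* * TE)."""
        n = len(echo_times)
        ys = [math.log(s) for s in signals]
        mean_t = sum(echo_times) / n
        mean_y = sum(ys) / n
        sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(echo_times, ys))
        sxx = sum((t - mean_t) ** 2 for t in echo_times)
        slope = sxy / sxx
        s0 = math.exp(mean_y - slope * mean_t)
        return -slope, s0  # R2* in 1/s when TE is in seconds

    # synthetic 6-echo acquisition: TE in seconds, true R2* = 50 1/s, S0 = 1000
    tes = [0.004, 0.009, 0.014, 0.019, 0.024, 0.029]
    sig = [1000.0 * math.exp(-50.0 * te) for te in tes]
    r2star, s0 = fit_r2star(tes, sig)
    ```

    On noisy magnitude data the log transform biases the fit at low SNR, which is one reason sequence work like the above aims to keep the vessel-wall signal high.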

  10. Historian: accurate reconstruction of ancestral sequences and evolutionary rates.

    PubMed

    Holmes, Ian H

    2017-04-15

    Reconstruction of ancestral sequence histories, and estimation of parameters like indel rates, are improved by using explicit evolutionary models and summing over uncertain alignments. The previous best tool for this purpose (according to simulation benchmarks) was ProtPal, but this tool was too slow for practical use. Historian combines an efficient reimplementation of the ProtPal algorithm with performance-improving heuristics from other alignment tools. Simulation results on fidelity of rate estimation via ancestral reconstruction, along with evaluations on the structurally informed alignment dataset BAliBase 3.0, recommend Historian over other alignment tools for evolutionary applications. Historian is available at https://github.com/evoldoers/historian under the Creative Commons Attribution 3.0 US license. ihholmes+historian@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  11. Simulations Using Random-Generated DNA and RNA Sequences

    ERIC Educational Resources Information Center

    Bryce, C. F. A.

    1977-01-01

    A very simple computer program written in BASIC generates a very large number of random DNA or RNA sequences. Students use these sequences to predict complementary sequences and translational products, evaluate base compositions, determine frequencies of particular triplet codons, and suggest possible secondary structures.…
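    The exercise translates directly from BASIC into a few lines of modern code; below is a sketch of the random-sequence generation, complementation, and codon counting described above (function names are illustrative):

    ```python
    import random
    from collections import Counter

    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def random_dna(length, seed=None):
        """Uniform random DNA string of the given length."""
        rng = random.Random(seed)
        return "".join(rng.choice("ACGT") for _ in range(length))

    def complement(seq):
        """Base-by-base complementary strand (not reversed)."""
        return "".join(COMPLEMENT[b] for b in seq)

    def codon_frequencies(seq):
        """Counts of non-overlapping triplets read in frame 0."""
        return Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))

    def base_composition(seq):
        counts = Counter(seq)
        return {b: counts[b] / len(seq) for b in "ACGT"}

    seq = random_dna(300, seed=42)
    comp = complement(seq)
    codons = codon_frequencies(seq)
    ```

    Students can then compare observed codon frequencies against the 1/64 expectation for a uniform random sequence, just as the article suggests.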

  12. Sequence- and Temperature-Dependent Properties of Unfolded and Disordered Proteins from Atomistic Simulations.

    PubMed

    Zerze, Gül H; Best, Robert B; Mittal, Jeetain

    2015-11-19

    We use all-atom molecular simulation with explicit solvent to study the properties of selected intrinsically disordered proteins and unfolded states of foldable proteins, which include chain dimensions and shape, secondary structure propensity, solvent accessible surface area, and contact formation. We find that the qualitative scaling behavior of the chains matches expectations from theory under ambient conditions. In particular, unfolded globular proteins tend to be more collapsed under the same conditions than charged disordered sequences of the same length. However, inclusion of explicit solvent in addition naturally captures temperature-dependent solvation effects, which results in an initial collapse of the chains as temperature is increased, in qualitative agreement with experiment. There is a universal origin to the collapse, revealed in the change of hydration of individual residues as a function of temperature: namely, that the initial collapse is driven by unfavorable solvation free energy of individual residues, which in turn has a strong temperature dependence. We also observe that in unfolded globular proteins, increased temperature also initially favors formation of native-like (rather than non-native-like) structure. Our results help to establish how sequence encodes the degree of intrinsic disorder or order as well as its response to changes in environmental conditions.
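    The theoretical scaling baseline the abstract compares against can be checked with a toy freely jointed lattice chain (a sketch of the ideal-chain reference, not an atomistic simulation; chain lengths and replicate counts are arbitrary):

    ```python
    import math
    import random

    def radius_of_gyration_sq(walk):
        """Squared radius of gyration of a list of 3D points."""
        n = len(walk)
        cx = sum(p[0] for p in walk) / n
        cy = sum(p[1] for p in walk) / n
        cz = sum(p[2] for p in walk) / n
        return sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                   for x, y, z in walk) / n

    def random_walk(n_steps, rng):
        """Ideal (freely jointed) chain on a cubic lattice: unit steps along random axes."""
        x = y = z = 0.0
        pts = [(x, y, z)]
        for _ in range(n_steps):
            axis, sign = rng.randrange(3), rng.choice((-1.0, 1.0))
            step = [0.0, 0.0, 0.0]
            step[axis] = sign
            x, y, z = x + step[0], y + step[1], z + step[2]
            pts.append((x, y, z))
        return pts

    rng = random.Random(11)

    def mean_rg2(n_steps, reps=2000):
        return sum(radius_of_gyration_sq(random_walk(n_steps, rng)) for _ in range(reps)) / reps

    rg2_small, rg2_large = mean_rg2(100), mean_rg2(400)
    # scaling exponent nu from Rg ~ N^nu; ideal-chain theory gives nu = 1/2
    nu = 0.5 * math.log(rg2_large / rg2_small) / math.log(400.0 / 100.0)
    ```

    The fitted exponent lands near the ideal-chain value of 1/2; deviations from such baselines (toward collapse or expansion) are what the explicit-solvent simulations in the abstract quantify.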

  13. An improved immune algorithm for optimizing the pulse width modulation control sequence of inverters

    NASA Astrophysics Data System (ADS)

    Sheng, L.; Qian, S. Q.; Ye, Y. Q.; Wu, Y. H.

    2017-09-01

    In this article, an improved immune algorithm (IIA), based on the fundamental principles of the biological immune system, is proposed for optimizing the pulse width modulation (PWM) control sequence of a single-phase full-bridge inverter. The IIA takes advantage of the receptor editing and adaptive mutation mechanisms of the immune system to develop two operations that enhance the population diversity and convergence of the proposed algorithm. To verify the effectiveness and examine the performance of the IIA, 17 cases are considered, including fixed and disturbed resistances. Simulation results show that the IIA is able to obtain an effective PWM control sequence. Furthermore, when compared with existing immune algorithms (IAs), genetic algorithms (GAs), a non-traditional GA, simplified simulated annealing, and a generalized Hopfield neural network method, the IIA can achieve small total harmonic distortion (THD) and large magnitude. Meanwhile, a non-parametric test indicates that the IIA is significantly better than most comparison algorithms. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2016.1250894.
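    The objective such optimizers score, total harmonic distortion, is straightforward to compute for any candidate switching waveform; here is a sketch that evaluates THD via a direct DFT over one period (the harmonic cutoff at 49 is an arbitrary choice, and the square wave stands in for a crude PWM sequence):

    ```python
    import cmath
    import math

    def harmonic_amplitudes(samples, max_harmonic):
        """Amplitudes of harmonics 1..max_harmonic of a real periodic waveform,
        via a direct DFT over one period of samples."""
        n = len(samples)
        amps = []
        for k in range(1, max_harmonic + 1):
            xk = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            amps.append(2.0 * abs(xk) / n)
        return amps

    def thd(samples, max_harmonic=49):
        """Total harmonic distortion: RMS of harmonics 2..max relative to the fundamental."""
        amps = harmonic_amplitudes(samples, max_harmonic)
        return math.sqrt(sum(a * a for a in amps[1:])) / amps[0]

    # a plain square wave: the crudest possible "PWM sequence" (one pulse per half-cycle)
    n = 1024
    square = [1.0 if t < n // 2 else -1.0 for t in range(n)]
    thd_square = thd(square)
    ```

    The square wave comes out near the textbook value of roughly 48% THD; a PWM control sequence optimized as in the article drives this figure down while keeping the fundamental magnitude large.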

  14. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    PubMed

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about the number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km² area surrounding Seattle.

  15. Physical Scaffolding Accelerates the Evolution of Robot Behavior.

    PubMed

    Buckingham, David; Bongard, Josh

    2017-01-01

    In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: A fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.

  16. Clustering evolving proteins into homologous families.

    PubMed

    Chan, Cheong Xin; Mahbob, Maisarah; Ragan, Mark A

    2013-04-08

    Clustering sequences into groups of putative homologs (families) is a critical first step in many areas of comparative biology and bioinformatics. The performance of clustering approaches in delineating biologically meaningful families depends strongly on characteristics of the data, including content bias and degree of divergence. New, highly scalable methods have recently been introduced to cluster the very large datasets being generated by next-generation sequencing technologies. However, there has been little systematic investigation of how characteristics of the data impact the performance of these approaches. Using clusters from a manually curated dataset as reference, we examined the performance of a widely used graph-based Markov clustering algorithm (MCL) and a greedy heuristic approach (UCLUST) in delineating protein families coded by three sets of bacterial genomes of different G+C content. Both MCL and UCLUST generated clusters that are comparable to the reference sets at specific parameter settings, although UCLUST tends to under-cluster compositionally biased sequences (G+C content 33% and 66%). Using simulated data, we sought to assess the individual effects of sequence divergence, rate heterogeneity, and underlying G+C content. Performance decreased with increasing sequence divergence, decreasing among-site rate variation, and increasing G+C bias. Two MCL-based methods recovered the simulated families more accurately than did UCLUST. MCL using local alignment distances is more robust across the investigated range of sequence features than are greedy heuristics using distances based on global alignment. Our results demonstrate that sequence divergence, rate heterogeneity and content bias can individually and in combination affect the accuracy with which MCL and UCLUST can recover homologous protein families. 
For application to data that are more divergent, and exhibit higher among-site rate variation and/or content bias, MCL may often be the better choice, especially if computational resources are not limiting.
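
    Of the two clustering approaches compared above, MCL is simple enough to sketch. The toy implementation below runs the expansion/inflation iteration on a small graph of two cliques joined by a single edge; production MCL implementations add pruning heuristics and operate on large sparse similarity matrices, so treat this as illustrative only.

```python
import numpy as np

def mcl(A, inflation=2.0, n_iter=60, eps=1e-6):
    """Minimal Markov clustering on an adjacency matrix (self-loops added)."""
    M = A.astype(float) + np.eye(len(A))
    M /= M.sum(axis=0)                # column-stochastic flow matrix
    for _ in range(n_iter):
        M = M @ M                     # expansion: flow along longer paths
        M = M ** inflation            # inflation: strengthen strong flows
        M[M < eps] = 0.0              # prune tiny entries
        M /= M.sum(axis=0)
    clusters = set()
    for i in range(len(M)):
        if M[i, i] > eps:             # attractor row defines one cluster
            clusters.add(frozenset(np.flatnonzero(M[i] > eps)))
    return clusters

# Two triangles joined by a single edge: MCL should recover the two cliques.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
clusters = mcl(A)
```

    For sequence data the input matrix would hold pairwise similarity scores (e.g. from local alignments, as in the study) rather than 0/1 adjacency.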

  17. Problems with the random number generator RANF implemented on the CDC cyber 205

    NASA Astrophysics Data System (ADS)

    Kalle, Claus; Wansleben, Stephan

    1984-10-01

    We show that using RANF may lead to wrong results when lattice models are simulated by Monte Carlo methods. We present a shift-register sequence random number generator which generates two random numbers per cycle on a two pipe CDC Cyber 205.
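
    A two-tap shift-register (lagged XOR) generator of the kind proposed can be sketched as follows. The (250, 103) taps are the classic Kirkpatrick-Stoll choice and may differ from the authors' exact construction; note also that later work found two-tap shift-register generators can themselves bias some Monte Carlo simulations, so this is illustrative rather than a recommendation.

```python
import random

class R250:
    """Shift-register sequence generator: x[n] = x[n-103] XOR x[n-250]."""

    def __init__(self, seed=1):
        rnd = random.Random(seed)
        # seed the 250-word state with an auxiliary generator
        self.buf = [rnd.getrandbits(32) for _ in range(250)]
        self.i = 0

    def next_u32(self):
        # buf[i] currently holds x[n-250]; overwrite it with the new value
        new = self.buf[(self.i - 103) % 250] ^ self.buf[self.i]
        self.buf[self.i] = new
        self.i = (self.i + 1) % 250
        return new
```

    On vector hardware like the Cyber 205 the same recurrence can be evaluated for many lags at once, which is how the authors obtain two random numbers per cycle.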

  18. An experimental phylogeny to benchmark ancestral sequence reconstruction

    PubMed Central

    Randall, Ryan N.; Radford, Caelan E.; Roof, Kelsey A.; Natarajan, Divya K.; Gaucher, Eric A.

    2016-01-01

    Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as 'modern' sequences that we subject to ASR analyses using various algorithms and benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had a minor effect on the inference of ancestral sequences. PMID:27628687
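
    The maximum-parsimony criterion benchmarked above can be illustrated with the Fitch algorithm for a single site. The tree and states below are toy examples, not the paper's fluorescent-protein data.

```python
def fitch(node):
    """Fitch small parsimony on a nested-tuple binary tree; leaves are states.

    Returns (candidate ancestral states at this node, minimum changes below it).
    """
    if isinstance(node, str):                      # leaf: its state is known
        return {node}, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    inter = ls & rs
    if inter:                                      # children agree: intersect
        return inter, lc + rc
    return ls | rs, lc + rc + 1                    # disagree: union, one change
```

    For the tree ((A,A),(A,G)) the root set is {A} with one inferred change; Bayesian methods instead integrate over substitution models with among-site rate variation, which is what gave the higher phenotypic accuracy reported.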

  19. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. 
Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal models. Overall results suggest that adaptation and anticipation mechanisms both play an important role during sensorimotor synchronization with tempo-changing sequences. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.
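
    The adaptation (error correction) and anticipation (tempo extrapolation) mechanisms can be sketched in a toy tapping simulation. The update rules below are illustrative simplifications, not ADAM's exact equations; the initial offset, gain, and extrapolation rule are assumptions.

```python
def simulate_taps(stim, alpha=0.5, anticipate=True):
    """Simulate tap onsets against pacing-tone onsets `stim`."""
    taps = [stim[0] + 0.05]                        # first tap slightly off-beat
    for n in range(1, len(stim)):
        if anticipate and n >= 3:
            # anticipation: linearly extrapolate the tempo trend
            nxt = 2 * (stim[n-1] - stim[n-2]) - (stim[n-2] - stim[n-3])
        elif n >= 2:
            nxt = stim[n-1] - stim[n-2]            # last observed interval
        else:
            nxt = stim[1] - stim[0]                # assume initial tempo known
        asyn = taps[-1] - stim[n-1]                # most recent asynchrony
        taps.append(taps[-1] + nxt - alpha * asyn)  # adaptation: phase correction
    return taps
```

    With an isochronous sequence the asynchrony decays geometrically; with a steadily changing tempo, the adaptation-only variant lags behind the pacing sequence while the anticipating variant tracks the trend, mirroring the qualitative finding above.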

  20. Development and validation of a D-loop mtDNA SNP assay for the screening of specimens in forensic casework.

    PubMed

    Chemale, Gustavo; Paneto, Greiciane Gaburro; Menezes, Meiga Aurea Mendes; de Freitas, Jorge Marcelo; Jacques, Guilherme Silveira; Cicarelli, Regina Maria Barretto; Fagundes, Paulo Roberto

    2013-05-01

    Mitochondrial DNA (mtDNA) analysis is usually a last resort in routine forensic DNA casework. However, it has become a powerful tool for the analysis of highly degraded samples or samples containing too little or no nuclear DNA, such as old bones and hair shafts. The gold standard methodology still constitutes the direct sequencing of polymerase chain reaction (PCR) products or cloned amplicons from the HVS-1 and HVS-2 (hypervariable segment) control region segments. Identifications using mtDNA are time consuming, expensive and can be very complex, depending on the amount and nature of the material being tested. The main goal of this work is to develop a less labour-intensive and less expensive screening method for mtDNA analysis, in order to aid in the exclusion of non-matching samples and as a presumptive test prior to final confirmatory DNA sequencing. We have selected 14 highly discriminatory single nucleotide polymorphisms (SNPs) based on simulations performed by Salas and Amigo (2010) to be typed using SNaPShot(TM) (Applied Biosystems, Foster City, CA, USA). The assay was validated by typing more than 100 HVS-1/HVS-2 sequenced samples. No differences were observed between the SNP typing and DNA sequencing when results were compared, with the exception of allelic dropouts observed in a few haplotypes. Haplotype diversity simulations were performed using 172 mtDNA sequences representative of the Brazilian population and a score of 0.9794 was obtained when the 14 SNPs were used, showing that the theoretical prediction approach for the selection of highly discriminatory SNPs suggested by Salas and Amigo (2010) was confirmed in the population studied. As the main goal of the work is to develop a screening assay to skip the sequencing of all samples in a particular case, a pair-wise comparison of the sequences was done using the selected SNPs. 
When both HVS-1/HVS-2 SNPs were used for simulations, at least two differences were observed in 93.2% of the comparisons performed. The assay was validated with casework samples. Results show that the method is straightforward and can be used for exclusionary purposes, saving time and laboratory resources. The assay confirms the theoretical prediction suggested by Salas and Amigo (2010). All forensic advantages, such as high sensitivity and power of discrimination, as well as the disadvantages, such as the occurrence of allele dropouts, are discussed throughout the article. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
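
    The screening logic itself, comparing haplotypes only at the assayed SNP sites and excluding a match when they differ at two or more of them, reduces to a few lines. The positions below are hypothetical placeholders, not the 14 sites selected in the study.

```python
def profile(seq, positions):
    """Base calls at the assayed SNP positions (0-based, hypothetical)."""
    return tuple(seq[p] for p in positions)

def can_exclude(a, b, min_diffs=2):
    """Screening rule: exclude when profiles differ at >= min_diffs sites."""
    return sum(x != y for x, y in zip(a, b)) >= min_diffs
```

    Samples that cannot be excluded at this step would still proceed to confirmatory HVS-1/HVS-2 sequencing, consistent with the assay's presumptive role.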

  1. RY-Coding and Non-Homogeneous Models Can Ameliorate the Maximum-Likelihood Inferences From Nucleotide Sequence Data with Parallel Compositional Heterogeneity.

    PubMed

    Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo

    2012-01-01

    In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts four bases into purine and pyrimidine to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined in prior simulation studies. Here, we assessed the performances of the maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performances compared with homogeneous model-based analyses. Curiously, the performance of RY-coding analysis appeared to be significantly affected by the setting of the substitution process used for sequence simulation relative to that of non-homogeneous analysis. The performance of a non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
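
    RY-coding itself is a simple recoding. The sketch below shows how it collapses parallel compositional differences: a GC-rich and an AT-rich sequence become identical after recoding, which is exactly why the transformation normalizes base frequencies across a tree.

```python
# Purines (A, G) -> R; pyrimidines (C, T) -> Y
RY = str.maketrans("AGCTagct", "RRYYrryy")

def ry_recode(seq):
    """Collapse a nucleotide sequence to two-state purine/pyrimidine coding."""
    return seq.translate(RY)
```

    After recoding, a likelihood analysis uses a two-state substitution model, trading some signal (transitions within R or Y are invisible) for robustness to compositional bias.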

  2. A sequence-dependent rigid-base model of DNA

    NASA Astrophysics Data System (ADS)

    Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.

    2013-02-01

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.
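
    The Gaussian version of the model makes the Kullback-Leibler comparison between configuration-space densities tractable in closed form. A sketch of KL divergence between two multivariate Gaussians follows; the paper's precise comparison protocol may differ, so the function is illustrative.

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL(N(mu0, S0) || N(mu1, S1)) in closed form."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    k = len(mu0)
    Sinv = np.linalg.inv(S1)
    d = mu1 - mu0
    _, ld1 = np.linalg.slogdet(S1)     # log-determinants, numerically stable
    _, ld0 = np.linalg.slogdet(S0)
    return 0.5 * (np.trace(Sinv @ S0) + d @ Sinv @ d - k + ld1 - ld0)
```

    For rigid-base coordinates, mu would be the minimum-energy (frustrated) configuration and S the inverse of the oligomer's stiffness matrix, so the divergence quantifies how well a coarse-grain parameter set reproduces the MD-derived density.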

  3. A sequence-dependent rigid-base model of DNA.

    PubMed

    Gonzalez, O; Petkevičiūtė, D; Maddocks, J H

    2013-02-07

    A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.

  4. Molecular dynamics studies on the DNA-binding process of ERG.

    PubMed

    Beuerle, Matthias G; Dufton, Neil P; Randi, Anna M; Gould, Ian R

    2016-11-15

    The ETS family of transcription factors regulate gene targets by binding to a core GGAA DNA-sequence. The ETS factor ERG is required for homeostasis and lineage-specific functions in endothelial cells, some subset of haemopoietic cells and chondrocytes; its ectopic expression is linked to oncogenesis in multiple tissues. To date details of the DNA-binding process of ERG including DNA-sequence recognition outside the core GGAA-sequence are largely unknown. We combined available structural and experimental data to perform molecular dynamics simulations to study the DNA-binding process of ERG. In particular we were able to reproduce the ERG DNA-complex with a DNA-binding simulation starting in an unbound configuration with a final root-mean-square-deviation (RMSD) of 2.1 Å to the core ETS domain DNA-complex crystal structure. This allowed us to elucidate the relevance of amino acids involved in the formation of the ERG DNA-complex and to identify Arg385 as a novel key residue in the DNA-binding process. Moreover we were able to show that water-mediated hydrogen bonds are present between ERG and DNA in our simulations and that those interactions have the potential to achieve sequence recognition outside the GGAA core DNA-sequence. The methodology employed in this study shows the promising capabilities of modern molecular dynamics simulations in the field of protein DNA-interactions.

  5. Deciphering mRNA Sequence Determinants of Protein Production Rate

    NASA Astrophysics Data System (ADS)

    Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen

    2018-03-01

    One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
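
    The inhomogeneous TASEP underlying the model can be sketched with a minimal Monte Carlo simulation. Here the ribosome footprint is a single site rather than the ~10 codons discussed above, and the random-sequential update is a simplification, so the sketch is purely illustrative.

```python
import random

def tasep_current(rates, alpha, beta, steps=200000, seed=3):
    """Random-sequential TASEP; rates[i] is the hop rate across bond i -> i+1.

    Returns the average exit current (completed 'proteins' per sweep).
    """
    rnd = random.Random(seed)
    L = len(rates) + 1
    occ = [0] * L
    exits = 0
    for _ in range(steps):
        b = rnd.randrange(L + 1)                  # pick a random bond
        if b == 0:                                # initiation
            if not occ[0] and rnd.random() < alpha:
                occ[0] = 1
        elif b == L:                              # termination: protein done
            if occ[-1] and rnd.random() < beta:
                occ[-1] = 0
                exits += 1
        else:                                     # elongation, exclusion enforced
            if occ[b - 1] and not occ[b] and rnd.random() < rates[b - 1]:
                occ[b - 1], occ[b] = 0, 1
    return exits * (L + 1) / steps
```

    Consistent with the initiation-limited regime described above, lowering the initiation rate alpha lowers the production current even when all elongation rates are equal.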

  6. DNA Shape Dominates Sequence Affinity in Nucleosome Formation

    NASA Astrophysics Data System (ADS)

    Freeman, Gordon S.; Lequieu, Joshua P.; Hinckley, Daniel M.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2014-10-01

    Nucleosomes provide the basic unit of compaction in eukaryotic genomes, and the mechanisms that dictate their position at specific locations along a DNA sequence are of central importance to genetics. In this Letter, we employ molecular models of DNA and proteins to elucidate various aspects of nucleosome positioning. In particular, we show how DNA's histone affinity is encoded in its sequence-dependent shape, including subtle deviations from the ideal straight B-DNA form and local variations of minor groove width. By relying on high-precision simulations of the free energy of nucleosome complexes, we also demonstrate that, depending on DNA's intrinsic curvature, histone binding can be dominated by bending interactions or electrostatic interactions. More generally, the results presented here explain how sequence, manifested as the shape of the DNA molecule, dominates molecular recognition in the problem of nucleosome positioning.

  7. A Novel Partial Sequence Alignment Tool for Finding Large Deletions

    PubMed Central

    Aruk, Taner; Ustek, Duran; Kursun, Olcay

    2012-01-01

    Finding large deletions in genome sequences has become increasingly more useful in bioinformatics, such as in clinical research and diagnosis. Although there are a number of publicly available next generation sequencing mapping and sequence alignment programs, these software packages do not correctly align fragments containing deletions larger than one kb. We present a fast alignment software package, BinaryPartialAlign, that can be used by wet lab scientists to find long structural variations in their experiments. For BinaryPartialAlign, we make use of the Smith-Waterman (SW) algorithm with a binary-search-based approach for alignment with large gaps that we called partial alignment. BinaryPartialAlign implementation is compared with other straightforward applications of SW. Simulation results on mtDNA fragments demonstrate the effectiveness (runtime and accuracy) of the proposed method. PMID:22566777
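
    The Smith-Waterman recurrence on which the package builds can be sketched as follows (score-only, linear gap penalty; BinaryPartialAlign adds its binary-search partial-alignment strategy on top of this, which is not reproduced here).

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score between strings a and b (O(len(a)*len(b)))."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # local alignment: scores are floored at zero
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

    Because gaps are penalized per base, a multi-kb deletion would erase any alignment score, which is precisely the failure mode that motivates aligning the flanking fragments separately.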

  8. Auto-tracking system for human lumbar motion analysis.

    PubMed

    Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong

    2011-01-01

    Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software utilizing a particle filter locates the vertebra-of-interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, the automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. A clinical trial enrolling 10 healthy subjects and 2 lumbar spondylolisthesis patients was conducted in this study. The measurement with auto-tracking of DVF provided information not visible in conventional X-ray images. These results suggest the potential of the proposed system for clinical applications.
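
    Particle-filter tracking of the kind used for vertebra localization can be illustrated in one dimension. The motion and measurement models below are generic assumptions, not the paper's image-based likelihood, so the sketch only shows the predict/weight/resample loop.

```python
import math
import random

def track_1d(measurements, n=500, motion_sd=0.5, meas_sd=2.0, seed=0):
    """Bootstrap particle filter estimating a slowly moving 1-D position."""
    rnd = random.Random(seed)
    particles = [measurements[0] + rnd.gauss(0, meas_sd) for _ in range(n)]
    estimates = []
    for z in measurements:
        # predict: diffuse particles with the motion model
        particles = [p + rnd.gauss(0, motion_sd) for p in particles]
        # weight: Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / meas_sd) ** 2) for p in particles]
        # resample: draw particles proportional to weight
        particles = rnd.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates

# Static target at x = 10 observed with noise:
_rnd = random.Random(1)
zs = [10 + _rnd.gauss(0, 2.0) for _ in range(25)]
est = track_1d(zs)
```

    In the paper's setting each particle would carry a 2-D pose (translation and rotation) of the vertebra and the likelihood would come from image similarity within the DVF frame.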

  9. Dynamic Modelling of Fault Slip Induced by Stress Waves due to Stope Production Blasts

    NASA Astrophysics Data System (ADS)

    Sainoki, Atsushi; Mitri, Hani S.

    2016-01-01

    Seismic events can take place due to the interaction of stress waves induced by stope production blasts with faults located in close proximity to stopes. The occurrence of such seismic events needs to be controlled to ensure the safety of the mine operators and the underground mine workings. This paper presents the results of a dynamic numerical modelling study of fault slip induced by stress waves resulting from stope production blasts. First, the calibration of a numerical model having a single blast hole is performed using a charge weight scaling law to determine blast pressure and damping coefficient of the rockmass. Subsequently, a numerical model of a typical Canadian metal mine encompassing a fault parallel to a tabular ore deposit is constructed, and the simulation of stope extraction sequence is carried out with static analyses until the fault exhibits slip burst conditions. At that point, the dynamic analysis begins by applying the calibrated blast pressure to the stope wall in the form of velocities generated by the blast holes. It is shown from the results obtained from the dynamic analysis that the stress waves reflected on the fault create a drop of normal stresses acting on the fault, which in turn reduces the shear stresses and results in fault slip. The influence of blast sequences on the behaviour of the fault is also examined assuming several types of blast sequences. Comparison of the blast sequence simulation results indicates that performing simultaneous blasts symmetrically induces the same level of seismic events as separate blasts, although seismic energy is more rapidly released when blasts are performed symmetrically. On the other hand, when nine blast holes are blasted simultaneously, a large seismic event is induced compared to the other two blast sequences. It is concluded that separate blasts might be employed under the adopted geological conditions. 
The developed methodology and procedure to arrive at an ideal blast sequence can be applied to other mines where faults are found in the vicinity of stopes.

  10. Simulating Next-Generation Sequencing Datasets from Empirical Mutation and Sequencing Models

    PubMed Central

    Stephens, Zachary D.; Hudson, Matthew E.; Mainzer, Liudmila S.; Taschuk, Morgan; Weber, Matthew R.; Iyer, Ravishankar K.

    2016-01-01

    An obstacle to validating and benchmarking methods for genome analysis is that there are few reference datasets available for which the “ground truth” about the mutational landscape of the sample genome is known and fully validated. Additionally, the free and public availability of real human genome datasets is incompatible with the preservation of donor privacy. In order to better analyze and understand genomic data, we need test datasets that model all variants, reflecting known biology as well as sequencing artifacts. Read simulators can fulfill this requirement, but are often criticized for limited resemblance to true data and overall inflexibility. We present NEAT (NExt-generation sequencing Analysis Toolkit), a set of tools that not only includes an easy-to-use read simulator, but also scripts to facilitate variant comparison and tool evaluation. NEAT has a wide variety of tunable parameters which can be set manually on the default model or parameterized using real datasets. The software is freely available at github.com/zstephens/neat-genreads. PMID:27893777
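
    The core of a read simulator can be sketched in a few lines: sample a start position, copy the reference, and inject substitution errors at a given per-base rate. NEAT's actual error, quality, and mutation models are far richer (and parameterizable from real data), so this is only a minimal illustration.

```python
import random

def simulate_reads(ref, read_len=12, n_reads=8, err=0.01, seed=0):
    """Uniformly sample reads from ref, flipping each base with probability err.

    Returns (start, read) pairs so the "ground truth" origin of every read
    is known, which is the property that makes simulated data useful for
    benchmarking aligners and variant callers.
    """
    rnd = random.Random(seed)
    reads = []
    for _ in range(n_reads):
        start = rnd.randrange(len(ref) - read_len + 1)
        bases = []
        for b in ref[start:start + read_len]:
            if rnd.random() < err:
                b = rnd.choice([c for c in "ACGT" if c != b])  # substitution
            bases.append(b)
        reads.append((start, "".join(bases)))
    return reads
```

    A full toolkit like NEAT additionally simulates variants in the reference first, writes the truth set (e.g. as VCF), and models coverage, quality scores, and paired-end fragment lengths.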

  11. Track and mode controller (TMC): a software executive for a high-altitude pointing and tracking experiment

    NASA Astrophysics Data System (ADS)

    Michnovicz, Michael R.

    1997-06-01

    A real-time executive has been implemented to control a high altitude pointing and tracking experiment. The track and mode controller (TMC) implements a table-driven design, in which the track mode logic for a tracking mission is defined within a state transition diagram (STD). The STD is implemented as a state transition table in the TMC software. Status events trigger the state transitions in the STD. Each state, as it is entered, causes a number of processes to be activated within the system. As these processes propagate through the system, the status of key processes is monitored by the TMC, allowing further transitions within the STD. This architecture is implemented in real-time, using the VxWorks operating system. VxWorks message queues allow communication of status events from the Event Monitor task to the STD task. Process commands are propagated to the rest of the system processors by means of the SCRAMNet shared memory network. The system mode logic contained in the STD will autonomously sequence an acquisition, tracking and pointing system through an entire engagement sequence, starting with target detection and ending with aimpoint maintenance. Simulation results and lab test results will be presented to verify the mode controller. In addition to implementing the system mode logic with the STD, the TMC can process prerecorded time sequences of commands required during startup operations. It can also process single commands from the system operator. 
In this paper, the author presents (1) an overview, in which he describes the TMC architecture, the relationship of an end-to-end simulation to the flight software and the laboratory testing environment, (2) implementation details, including information on the VxWorks message queues and the SCRAMNet shared memory network, (3) simulation results and lab test results which verify the mode controller, and (4) plans for the future, specifically as to how this executive will expedite transition to a fully functional system.
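
    The table-driven design can be condensed into a small sketch: a state transition table maps (state, status event) pairs to a next state and the processes to activate. The state, event, and action names below are hypothetical stand-ins for the experiment's actual STD.

```python
# Hypothetical state transition table: (state, event) -> (next state, actions).
TRANSITIONS = {
    ("IDLE",    "TARGET_DETECTED"):   ("ACQUIRE", ("start_coarse_track",)),
    ("ACQUIRE", "TRACK_LOCKED"):      ("TRACK",   ("start_fine_track",)),
    ("TRACK",   "AIMPOINT_SELECTED"): ("POINT",   ("maintain_aimpoint",)),
    ("TRACK",   "TRACK_LOST"):        ("ACQUIRE", ("start_coarse_track",)),
}

def run_std(events, state="IDLE"):
    """Feed status events through the table; unknown events leave state unchanged."""
    issued = []
    for ev in events:
        state, actions = TRANSITIONS.get((state, ev), (state, ()))
        issued.extend(actions)
    return state, issued
```

    In the real system the events would arrive on message queues and the actions would be commands broadcast over shared memory, but the mode logic itself stays in the table, which is what makes the design easy to verify and modify.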

  12. SU-G-IeP1-08: MR Geometric Distortion Dependency On Imaging Sequence, Acquisition Orientation and Receiver Bandwidth of a Dedicated 1.5T MR-Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, M; Yuan, J; Wong, O

    Purpose: To investigate the 3D geometric distortion of four potential MR sequences for radiotherapeutic applications, and its dependency on sequence-type, acquisition-orientation and receiver-bandwidth from a dedicated 1.5T 700mm-wide bore MR-simulator (Magnetom-Aera, Siemens Healthcare, Erlangen, Germany), using a large customized geometric accuracy phantom. Methods: This work studied 3D gradient-echo (VIBE) and spin-echo (SPACE) sequences for anatomical imaging; a specific ultra-short-TE sequence (PETRA) potentially for bone imaging and MR-based dosimetry; and a motion-insensitive sequence (BLADE) for dynamic applications like 4D-MRI. Integrated geometric-correction was employed; three orthogonal acquisition-orientations and up to three receiver-bandwidths were used, yielding 27 acquisitions for testing (Table 1a). A customized geometric accuracy phantom (polyurethane, MR/CT invisible, W×L×H:55×55×32.5cm3) was constructed and filled with 3892 spherical markers (6mm diameter, MR/CT visible) arranged on a 25mm-interval 3D isotropic-grid (Fig.1). The marker positions in MR images were quantitatively calculated and compared against those in the CT-reference using customized MATLAB scripts. Results: The average distortion within various diameter-of-spherical-volumes (DSVs) and the usable DSVs under various distortion limits were measured (Tables 1b-c). It was observed that distortions fluctuated when sequence-type, acquisition-orientation or receiver-bandwidth changed (e.g. within the 300mm-DSV, the lowest/highest average distortions of VIBE were 0.40mm/0.59mm, a 47.5% difference). According to AAPM TG-66 (<1mm distortion, left-most column of Table 1c), PETRA (Largest-DSV: 253.9mm) shows potential for brain treatment, while BLADE (Largest-DSV: 207.2mm) may need improvement for thoracic/abdominal applications. 
The results of VIBE (Largest-DSV: 294.3mm, the best among tested acquisitions) and SPACE (Largest-DSV: 267.7mm) suggest their potential for head and neck applications. These largest DSVs were attained at different acquisition-orientations and receiver-bandwidths. Conclusion: Geometric distortion was shown to be dependent on sequence-type, acquisition-orientation and receiver-bandwidth. In the experiment, no configuration of any one of these factors could consistently reduce distortion while the others were varying. The distortion analysis is a valuable guideline for sequence selection and optimization for MR-aided radiotherapy applications.

  13. Comparison of corrosion scales in full and partially replaced lead service lines after changes in water quality

    EPA Science Inventory

    Preliminary results from scales formed 38 weeks following the LSL replacement simulations revealed differences in scale formations amongst varying water qualities and pipe sequence. Rigs fed with dechlorinated tap water show distinct pH gradients between the galvanic and the back...

  14. Simulation of Cavern Formation and Karst Development Using Salt

    ERIC Educational Resources Information Center

    Kent, Douglas C.; Ross, Alex R.

    1975-01-01

    A salt model was developed as a teaching tool to demonstrate the development of caverns and karst topography. Salt slabs are placed in a watertight box to represent fractured limestone. Erosion resulting from water flow can be photographed in time-lapse sequence or demonstrated in the laboratory. (Author/CP)

  15. Validation of a Video-based Game-Understanding Test Procedure in Badminton.

    ERIC Educational Resources Information Center

    Blomqvist, Minna T.; Luhtanen, Pekka; Laakso, Lauri; Keskinen, Esko

    2000-01-01

    Reports the development and validation of video-based game-understanding tests in badminton for elementary and secondary students. The tests included different sequences that simulated actual game situations. Players had to solve tactical problems by selecting appropriate solutions and arguments for their decisions. Results suggest that the test…

  16. Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model, is provided. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. Because the program is a simulation, the method of attack is to decompose the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine the statistical parameters of minimum, average, and maximum value.
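The event-driven Monte Carlo approach described above can be sketched generically: a priority queue of timed events, deterministic launches that spawn random module failures, and repeated runs with different seeds to extract minimum/average/maximum statistics. This is a sketch of the general technique, not the LOVES code itself; the event types and rates are assumptions:

```python
import heapq
import random

def run_once(seed, launches, fail_rate, horizon):
    """One event-driven run: scheduled launches plus random module
    failures; returns the number of servicing events within the horizon.
    (Generic sketch of the approach, not the LOVES implementation.)"""
    rng = random.Random(seed)
    events = [(t, "launch") for t in launches]   # (time, kind) priority queue
    heapq.heapify(events)
    services = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "launch":
            # each launched module later fails at a random exponential time
            heapq.heappush(events, (t + rng.expovariate(fail_rate), "failure"))
        else:
            services += 1                        # a failure triggers servicing
    return services

def monte_carlo(n_runs, **kw):
    """Repeat with different seeds to get min/average/max statistics."""
    results = [run_once(seed, **kw) for seed in range(n_runs)]
    return min(results), sum(results) / len(results), max(results)
```

Varying the seed across runs is exactly the "different random numbers" device the abstract describes for obtaining the statistical spread.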

  17. Introducing folding stability into the score function for computational design of RNA-binding peptides boosts the probability of success.

    PubMed

    Xiao, Xingqing; Agris, Paul F; Hall, Carol K

    2016-05-01

    A computational strategy that integrates our peptide search algorithm with atomistic molecular dynamics simulation was used to design rational peptide drugs that recognize and bind to the anticodon stem and loop domain (ASL(Lys3)) of human tRNA(Lys3) (anticodon UUU) for the purpose of interrupting HIV replication. The score function of the search algorithm was improved by adding a peptide stability term weighted by an adjustable factor λ to the peptide binding free energy. The five best peptide sequences associated with five different values of λ were determined using the search algorithm and then input in atomistic simulations to examine the stability of the peptides' folded conformations and their ability to bind to ASL(Lys3). Simulation results demonstrated that setting an intermediate value of λ achieves a good balance between optimizing the peptide's binding ability and stabilizing its folded conformation during the sequence evolution process, and hence leads to optimal binding to the target ASL(Lys3). Thus, addition of a peptide stability term significantly improves the success rate for our peptide design search. © 2016 Wiley Periodicals, Inc.

  18. A robust measure of HIV-1 population turnover within chronically infected individuals.

    PubMed

    Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J

    2004-10-01

    A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates, and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10^3 and 10^4 viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10^-6 per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.

  19. Analysis of dispatching rules in a stochastic dynamic job shop manufacturing system with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Sharma, Pankaj; Jain, Ajai

    2014-12-01

    Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
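A shortest-setup-time rule of the SIMSET type can be sketched as a queue selector over a sequence-dependent setup table. The job attributes and the table format below are hypothetical, not taken from the paper's simulation model:

```python
def simset(queue, setup_time, prev_family):
    """SIMSET dispatching rule: among waiting jobs, pick the one with
    the shortest sequence-dependent setup time given the job family
    that just finished on the machine.
    setup_time is a hypothetical (prev_family, next_family) -> minutes table."""
    return min(queue, key=lambda job: setup_time[(prev_family, job["family"])])
```

Because the setup time depends on the previously processed job, the rule must be re-evaluated after every completion rather than computed once per job on arrival.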

  20. Draft genome sequence of two Sphingopyxis sp. strains H107 and H115 isolated from a chloraminated drinking water distribution system simulator

    EPA Pesticide Factsheets

    Draft genome sequence of two Sphingopyxis sp. strains H107 and H115 isolated from a chloraminated drinking water distribution system simulator. This dataset is associated with the following publication: Gomez-Alvarez, V., S. Pfaller, and R. Revetta. Draft Genome of Two Sphingopyxis sp. Strains, Dominant Members of the Bacterial Community Associated with a Drinking Water Distribution System Simulator. Genome Announcements. American Society for Microbiology, Washington, DC, USA, 4(2): e00183-16, (2016).

  1. Evaluating multiplexed next-generation sequencing as a method in palynology for mixed pollen samples.

    PubMed

    Keller, A; Danner, N; Grimmer, G; Ankenbrand, M; von der Ohe, K; von der Ohe, W; Rost, S; Härtel, S; Steffan-Dewenter, I

    2015-03-01

    The identification of pollen plays an important role in ecology, palaeo-climatology, honey quality control and other areas. Currently, expert knowledge and reference collections are essential to identify pollen origin through light microscopy. Pollen identification through molecular sequencing and DNA barcoding has been proposed as an alternative approach, but the assessment of mixed pollen samples originating from multiple plant species is still a tedious and error-prone task. Next-generation sequencing has been proposed to avoid this hindrance. In this study we assessed mixed pollen samples through next-generation sequencing of amplicons from the highly variable, species-specific internal transcribed spacer 2 region of nuclear ribosomal DNA. Further, we developed a bioinformatic workflow to analyse these high-throughput data with a newly created reference database. To evaluate the feasibility, we compared results from classical identification based on light microscopy from the same samples with our sequencing results. We assessed in total 16 mixed pollen samples, 14 originating from honeybee colonies and two from solitary bee nests. The sequencing technique resulted in higher taxon richness (deeper assignments and more identified taxa) compared to light microscopy. Abundance estimations from sequencing data were significantly correlated with counted abundances through light microscopy. Simulation analyses of taxon specificity and sensitivity indicate that 96% of taxa present in the database are correctly identifiable at the genus level and 70% at the species level. Next-generation sequencing thus presents a useful and efficient workflow to identify pollen at the genus and species level without requiring specialised palynological expert knowledge. © 2014 German Botanical Society and The Royal Botanical Society of the Netherlands.

  2. Laser beam propagation through turbulence and adaptive optics for beam delivery improvement

    NASA Astrophysics Data System (ADS)

    Nicolas, Stephane

    2015-10-01

    We report results from numerical simulations of laser beam propagation through atmospheric turbulence. In particular, we study the statistical variations of the fractional beam energy hitting inside an optical aperture placed at several kilometer distance. The simulations are performed for different turbulence conditions and engagement ranges, with and without the use of turbulence mitigation. Turbulence mitigation is simulated with phase conjugation. The energy fluctuations are deduced from time sequence realizations. It is shown that turbulence mitigation leads to an increase of the mean energy inside the aperture and a decrease of the fluctuations, even in strong turbulence conditions and long distance engagement. As an example, the results are applied to a high energy laser countermeasure system, where we determine the probability that a single laser pulse, or one of the pulses in a sequence, will deliver a lethal energy inside the target aperture. Again, turbulence mitigation contributes to increasing the performance of the system at long distance and in strong turbulence conditions in terms of kill probability. We also discuss a specific case where turbulence contributes to increasing the pulse energy within the target aperture. The present analysis can be used to evaluate the performance of a variety of systems, such as directed countermeasures, laser communication, and laser weapons.

  3. Intramural activation and repolarization sequences in canine ventricles. Experimental and simulation studies.

    PubMed

    Taccardi, Bruno; Punske, Bonnie B; Sachse, Frank; Tricoche, Xavier; Colli-Franzone, Piero; Pavarino, Luca F; Zabawa, Christine

    2005-10-01

    There are no published data showing the three-dimensional sequence of repolarization and the associated potential fields in the ventricles. Knowledge of the sequence of repolarization has medical relevance because high spatial dispersion of recovery times and action potential durations favors cardiac arrhythmias. In this study we describe measured and simulated 3-D excitation and recovery sequences and activation-recovery intervals (ARIs) (measured) or action potential durations (APDs) (simulated) in the ventricular walls. We recorded from 600 to 1400 unipolar electrograms from canine ventricular walls during atrial and ventricular pacing at 350-450 ms cycle length. Measured excitation and recovery times and ARIs were displayed as 2-D maps in transmural planes or 3-D maps in the volume explored, using specially developed software. Excitation and recovery sequences and APD distributions were also simulated in parallelepipedal slabs using anisotropic monodomain or bidomain models based on the Luo-Rudy version 1 model with homogeneous membrane properties. Simulations showed that in the presence of homogeneous membrane properties, the sequence of repolarization was similar but not identical to the excitation sequence. In a transmural plane perpendicular to epicardial fiber direction, both activation and recovery pathways starting from an epicardial pacing site returned toward the epicardium at a few cm distance from the pacing site. However, APDs were not constant, but had a dispersion of approximately 14 ms in the simulated domain. The maximum APD value was near the pacing site and two minima appeared along a line perpendicular to fiber directions, passing through the pacing site. Electrical measurements in dog ventricles showed that, for short cycle lengths, both excitation and recovery pathways, starting from an epicardial pacing site, returned toward the epicardium. For slower pacing rates, pathways of recovery departed from the pathway of excitation. 
Highest ARI values were observed near the pacing site in part of the experiments. In addition, maps of activation-recovery intervals showed mid-myocardial clusters with activation-recovery intervals that were slightly longer than ARIs closer to the epi- or endocardium, suggesting the presence of M cells in those areas. Transmural dispersion of measured ARIs was on the order of 20-25 ms. Potential distributions during recovery were less affected by myocardial anisotropy than were excitation potentials.

  4. Multiple Access Interference Reduction Using Received Response Code Sequence for DS-CDMA UWB System

    NASA Astrophysics Data System (ADS)

    Toh, Keat Beng; Tachikawa, Shin'ichi

    This paper proposes the combination of a novel Received Response (RR) sequence at the transmitter and a Matched Filter-RAKE (MF-RAKE) combining scheme at the receiver for the Direct Sequence-Code Division Multiple Access Ultra Wideband (DS-CDMA UWB) multipath channel model. This paper also demonstrates the effectiveness of the RR sequence in Multiple Access Interference (MAI) reduction for the DS-CDMA UWB system. It suggests that by using a conventional binary code sequence such as the M sequence or the Gold sequence, there is a possibility of generating extra MAI in the UWB system; it is therefore quite difficult to collect the energy efficiently even when the RAKE reception method is applied at the receiver. The main purpose of the proposed system is to overcome the performance degradation for UWB transmission due to the occurrence of MAI during multiple accessing in the DS-CDMA UWB system. The proposed system improves performance by improving RAKE reception using the RR sequence, which reduces the MAI effect significantly. Simulation results verify that significant improvement can be obtained by the proposed system in the UWB multipath channel models.

  5. Molecular dynamics simulations and docking enable to explore the biophysical factors controlling the yields of engineered nanobodies.

    PubMed

    Soler, Miguel A; de Marco, Ario; Fortuna, Sara

    2016-10-10

    Nanobodies (VHHs) have proved to be valuable substitutes of conventional antibodies for molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence, in the second the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches to predict experimental yields and enabled to identify the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.

  6. Molecular dynamics simulations and docking enable to explore the biophysical factors controlling the yields of engineered nanobodies

    NASA Astrophysics Data System (ADS)

    Soler, Miguel A.; De Marco, Ario; Fortuna, Sara

    2016-10-01

    Nanobodies (VHHs) have proved to be valuable substitutes of conventional antibodies for molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence, in the second the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches to predict experimental yields and enabled to identify the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.

  7. Damage Progression in Bolted Composites

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos C.; Gotsis, Pascal K.

    1998-01-01

    Structural durability, damage tolerance, and progressive fracture characteristics of bolted graphite/epoxy composite laminates are evaluated via computational simulation. Constituent material properties and stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for bolted composites. Single and double bolted composite specimens with various widths and bolt spacings are evaluated. The effect of bolt spacing is investigated with regard to the structural durability of a bolted joint. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulations. Results show the damage progression sequence and structural fracture resistance during different degradation stages. A procedure is outlined for the use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of experimental results with insight for design decisions.

  8. Damage Progression in Bolted Composites

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Chamis, Christos; Gotsis, Pascal K.

    1998-01-01

    Structural durability, damage tolerance, and progressive fracture characteristics of bolted graphite/epoxy composite laminates are evaluated via computational simulation. Constituent material properties and stress and strain limits are scaled up to the structure level to evaluate the overall damage and fracture propagation for bolted composites. Single and double bolted composite specimens with various widths and bolt spacings are evaluated. The effect of bolt spacing is investigated with regard to the structural durability of a bolted joint. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulations. Results show the damage progression sequence and structural fracture resistance during different degradation stages. A procedure is outlined for the use of computational simulation data in the assessment of damage tolerance, determination of sensitive parameters affecting fracture, and interpretation of experimental results with insight for design decisions.

  9. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network.

    PubMed

    Cai, Congbo; Wang, Chao; Zeng, Yiqing; Cai, Shuhui; Liang, Dong; Wu, Yawen; Chen, Zhong; Ding, Xinghao; Zhong, Jianhui

    2018-04-24

    An end-to-end deep convolutional neural network (CNN) based on deep residual network (ResNet) was proposed to efficiently reconstruct reliable T2 mapping from single-shot overlapping-echo detachment (OLED) planar imaging. The training dataset was obtained from simulations that were carried out on SPROM (Simulation with PRoduct Operator Matrix) software developed by our group. The relationship between the original OLED image containing two echo signals and the corresponding T2 mapping was learned by ResNet training. After the ResNet was trained, it was applied to reconstruct the T2 mapping from simulation and in vivo human brain data. Although the ResNet was trained entirely on simulated data, the trained network generalized well to real human brain data. The results from simulation and in vivo human brain experiments show that the proposed method significantly outperforms the echo-detachment-based method. Reliable T2 mapping with higher accuracy is achieved within 30 ms after the network has been trained, while the echo-detachment-based OLED reconstruction method took approximately 2 min. The proposed method will facilitate real-time dynamic and quantitative MR imaging via the OLED sequence, and deep convolutional neural networks have the potential to reconstruct maps from complex MRI sequences efficiently. © 2018 International Society for Magnetic Resonance in Medicine.

  10. Inference of Markovian properties of molecular sequences from NGS data and applications to comparative genomics.

    PubMed

    Ren, Jie; Song, Kai; Deng, Minghua; Reinert, Gesine; Cannon, Charles H; Sun, Fengzhu

    2016-04-01

    Next-generation sequencing (NGS) technologies generate large amounts of short read data for many different organisms. The fact that NGS reads are generally short makes it challenging to assemble the reads and reconstruct the original genome sequence. For clustering genomes using such NGS data, word-count based alignment-free sequence comparison is a promising approach, but for this approach the underlying expected word counts are essential. A plausible model for this underlying distribution of word counts is given through modeling the DNA sequence as a Markov chain (MC). For single long sequences, efficient statistics are available to estimate the order of MCs and the transition probability matrix for the sequences. As NGS data do not provide a single long sequence, inference methods on Markovian properties of sequences based on single long sequences cannot be directly used for NGS short read data. Here we derive a normal approximation for such word counts. We also show that the traditional Chi-square statistic has an approximate gamma distribution, using the Lander-Waterman model for physical mapping. We propose several methods to estimate the order of the MC based on NGS reads and evaluate those using simulations. We illustrate the applications of our results by clustering genomic sequences of several vertebrate and tree species based on NGS reads using alignment-free sequence dissimilarity measures. We find that the estimated order of the MC has a considerable effect on the clustering results, and that the clustering results that use an MC of the estimated order give a plausible clustering of the species. Our implementation of the statistics developed here is available as the R package 'NGS.MC' at http://www-rcf.usc.edu/∼fsun/Programs/NGS-MC/NGS-MC.html. Contact: fsun@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
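The word-count idea underlying this record can be illustrated by pooling (k+1)-mer counts across short reads to estimate an order-k transition matrix, since no single long sequence is available. This toy function is an illustration of the principle, not the NGS.MC implementation:

```python
from collections import Counter, defaultdict

def transition_probs(reads, k=1):
    """Estimate order-k Markov transition probabilities by pooling
    (k+1)-mer counts across short reads (toy illustration of the
    word-count approach, not the NGS.MC package)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k):
            counts[read[i:i + k + 1]] += 1        # count each (k+1)-mer
    totals = defaultdict(int)
    for word, c in counts.items():
        totals[word[:k]] += c                      # marginal context counts
    return {word[:k] + "->" + word[k]: c / totals[word[:k]]
            for word, c in counts.items()}
```

Choosing the order k itself is the harder statistical problem the paper addresses; this sketch assumes k is given.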

  11. Improving membrane protein expression by optimizing integration efficiency

    PubMed Central

    2017-01-01

    The heterologous overexpression of integral membrane proteins in Escherichia coli often yields insufficient quantities of purifiable protein for applications of interest. The current study leverages a recently demonstrated link between co-translational membrane integration efficiency and protein expression levels to predict protein sequence modifications that improve expression. Membrane integration efficiencies, obtained using a coarse-grained simulation approach, robustly predicted effects on expression of the integral membrane protein TatC for a set of 140 sequence modifications, including loop-swap chimeras and single-residue mutations distributed throughout the protein sequence. Mutations that improve simulated integration efficiency were 4-fold enriched with respect to improved experimentally observed expression levels. Furthermore, the effects of double mutations on both simulated integration efficiency and experimentally observed expression levels were cumulative and largely independent, suggesting that multiple mutations can be introduced to yield higher levels of purifiable protein. This work provides a foundation for a general method for the rational overexpression of integral membrane proteins based on computationally simulated membrane integration efficiencies. PMID:28918393

  12. FY11 Report on Metagenome Analysis using Pathogen Marker Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, Shea N.; Allen, Jonathan E.; McLoughlin, Kevin S.

    2011-06-02

    A method, sequence library, and software suite were developed to rapidly assess whether any member of a pre-specified list of threat organisms or their near neighbors is present in a metagenome. The system was designed to handle mega- to giga-bases of FASTA-formatted raw sequence reads from short or long read next generation sequencing platforms. The approach is to pre-calculate a viral and a bacterial "Pathogen Marker Library" (PML) containing sub-sequences specific to pathogens or their near neighbors. A list of expected matches comparing every bacterial or viral genome against the PML sequences is also pre-calculated. To analyze a metagenome, reads are compared to the PML, observed PML-metagenome matches are compared to the expected PML-genome matches, and the ratio of observed to expected matches is reported. In other words, a 3-way comparison among the PML, metagenome, and existing genome sequences is used to quickly assess which (if any) species included in the PML is likely to be present in the metagenome, based on available sequence data. Our tests showed that the species with the most PML matches correctly indicated the organism sequenced for empirical metagenomes consisting of a cultured, relatively pure isolate. These runs completed in 1 minute to 3 hours on 12 CPUs (1 thread/CPU), depending on the metagenome and PML. Using more threads on the same number of CPUs resulted in speed improvements roughly proportional to the number of threads. Simulations indicated that detection sensitivity depends on both the sequencing coverage level for a species and the size of the PML: species were correctly detected even at ~0.003x coverage by the large PMLs, and at ~0.03x coverage by the smaller PMLs. Matches to true positive species were 3-4 orders of magnitude higher than to false positives. 
Simulations with short reads (36 nt and ~260 nt) showed that species were usually detected for metagenome coverage above 0.005x and coverage in the PML above 0.05x, and detection probability appears to be a function of both coverages. Multiple species could be detected simultaneously in a simulated low-coverage, complex metagenome, and the largest PML gave no false negative species and no false positive genera. The presence of multiple species was predicted in a complex metagenome from a human gut microbiome with 1.9 GB of short reads (75 nt); the species predicted were reasonable gut flora and no biothreat agents were detected, showing the feasibility of PML analysis of empirical complex metagenomes.
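The core of the 3-way comparison described above reduces to a per-species ratio of observed PML-metagenome matches to pre-computed expected PML-genome matches. A minimal sketch, with hypothetical match counts:

```python
def detection_scores(observed, expected):
    """Ratio of observed PML-metagenome marker matches to pre-computed
    expected PML-genome matches for each candidate species; higher
    ratios suggest presence. Counts here are hypothetical examples."""
    return {species: observed.get(species, 0) / exp
            for species, exp in expected.items() if exp > 0}
```

Species absent from the metagenome score near zero, while a sequenced isolate should approach its expected match count, which is the 3-4 orders of magnitude separation the report describes.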

  13. Limit cycles in piecewise-affine gene network models with multiple interaction loops

    NASA Astrophysics Data System (ADS)

    Farcot, Etienne; Gouzé, Jean-Luc

    2010-01-01

    In this article, we consider piecewise affine differential equations modelling gene networks. We work with arbitrary decay rates, and under a local hypothesis expressed as an alignment condition of successive focal points. The interaction graph of the system may be rather complex (multiple intricate loops of any sign, multiple thresholds, etc.). Our main result is an alternative theorem showing that if a sequence of regions is periodically visited by trajectories, then under our hypotheses, there exists either a unique stable periodic solution, or the origin attracts all trajectories in this sequence of regions. This result greatly extends our previous work on a single negative feedback loop. We give several examples and simulations illustrating different cases.

  14. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.

  15. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  16. Protein sequences clustering of herpes virus by using Tribe Markov clustering (Tribe-MCL)

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Siswantining, T.; Febriyani, N. L.; Novitasari, I. D.; Cahyaningrum, R. D.

    2017-07-01

    The herpes virus can be found anywhere, and one of its important characteristics is its ability to cause acute and chronic infections at certain times, so the infection can lead to severe complications. The herpes virus is composed of DNA with associated protein, wrapped by glycoproteins. In this work, the herpes virus family is classified and analyzed by clustering its protein sequences using the Tribe Markov Clustering (Tribe-MCL) algorithm. Tribe-MCL is an efficient clustering method based on the theory of Markov chains that classifies protein families from protein sequences using pre-computed sequence similarity information. We implement the Tribe-MCL algorithm in the open-source language R. We select 24 protein sequences of the herpes virus obtained from the NCBI database. The dataset consists of three types of glycoprotein: B, F, and H. Each type covers eight herpes viruses that infect humans. Based on our simulations using different inflation factors r = 1.5, 2, and 3, we find varying numbers of clusters: the greater the inflation factor, the greater the number of clusters. Each protein is grouped together with proteins of the same type.
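
    The core MCL iteration underlying Tribe-MCL, expansion followed by inflation with factor r, can be sketched as follows; the toy similarity matrix and convergence details are assumptions for illustration, not the authors' R implementation.

```python
# Sketch of the Markov Clustering (MCL) iteration underlying Tribe-MCL:
# alternate expansion (matrix squaring) and inflation (elementwise power r)
# on a column-stochastic similarity matrix. The toy matrix is an assumption.
import numpy as np

def normalize(M):
    return M / M.sum(axis=0, keepdims=True)

def mcl_clusters(S, r=2.0, iters=50, eps=1e-6):
    M = normalize(S + np.eye(len(S)))            # self-loops, column-stochastic
    for _ in range(iters):
        M = normalize(normalize(M @ M) ** r)     # expand, then inflate
    # read clusters off the rows that retain probability mass
    rows = [frozenset(np.nonzero(M[i] > eps)[0])
            for i in range(len(M)) if M[i].max() > eps]
    return set(rows)

S = np.array([[0, 1, 1, 0, 0],      # two obvious blocks: {0,1,2} and {3,4}
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
clusters = mcl_clusters(S, r=2.0)
```

    A larger inflation factor r sharpens the columns more aggressively and tends to split the graph into more, smaller clusters, consistent with the trend reported above.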

  17. Generating human-like movements on an anthropomorphic robot using an interior point method

    NASA Astrophysics Data System (ADS)

    Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.

    2013-10-01

    In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.

  18. Multibody Modeling and Simulation for the Mars Phoenix Lander Entry, Descent and Landing

    NASA Technical Reports Server (NTRS)

    Queen, Eric M.; Prince, Jill L.; Desai, Prasun N.

    2008-01-01

    A multi-body flight simulation for the Phoenix Mars Lander has been developed that includes high fidelity six degree-of-freedom rigid-body models for the parachute and lander system. The simulation provides attitude and rate history predictions of all bodies throughout the flight, as well as loads on each of the connecting lines. In so doing, a realistic behavior of the descending parachute/lander system dynamics can be simulated that allows assessment of the Phoenix descent performance and identification of potential sensitivities for landing. This simulation provides a complete end-to-end capability of modeling the entire entry, descent, and landing sequence for the mission. Time histories of the parachute and lander aerodynamic angles are presented. The response of the lander system to various wind models and wind shears is shown to be acceptable. Monte Carlo simulation results are also presented.

  19. Draft versus finished sequence data for DNA and protein diagnostic signature development

    PubMed Central

    Gardner, Shea N.; Lam, Marisa W.; Smith, Jason R.; Torres, Clinton L.; Slezak, Tom R.

    2005-01-01

    Sequencing pathogen genomes is costly, demanding careful allocation of limited sequencing resources. We built a computational Sequencing Analysis Pipeline (SAP) to guide decisions regarding the amount of genomic sequencing necessary to develop high-quality diagnostic DNA and protein signatures. SAP uses simulations to estimate the number of target genomes and close phylogenetic relatives (near neighbors or NNs) to sequence. We use SAP to assess whether draft data are sufficient or finished sequencing is required using Marburg and variola virus sequences. Simulations indicate that intermediate- to high-quality draft with error rates of 10^−3–10^−5 (∼8× coverage) of target organisms is suitable for DNA signature prediction. Low-quality draft with error rates of ∼1% (3× to 6× coverage) of target isolates is inadequate for DNA signature prediction, although low-quality draft of NNs is sufficient, as long as the target genomes are of high quality. For protein signature prediction, sequencing errors in target genomes substantially reduce the detection of amino acid sequence conservation, even if the draft is of high quality. In summary, high-quality draft of target and low-quality draft of NNs appears to be a cost-effective investment for DNA signature prediction, but may lead to underestimation of predicted protein signatures. PMID:16243783

  20. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm, and then encoding the clusters separately. Mandela ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela ordered sequences. Padding sequences are used to improve the trellis search fidelity. The padding sequences absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
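
    Mandela ordering as described above amounts to a simple regrouping of transform coefficients; the 4x4 block size and random data below are assumptions for illustration.

```python
# Sketch of Mandela ordering: gather identically indexed DCT coefficients
# from every block of a cluster into one 1-D sequence per coefficient
# index. Block size and random data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.standard_normal((6, 4, 4))   # 6 transform blocks in one cluster

def mandela_sequences(blocks):
    n_blocks, h, w = blocks.shape
    # sequence (i, j) holds coefficient (i, j) from each block in turn
    return {(i, j): blocks[:, i, j].copy() for i in range(h) for j in range(w)}

seqs = mandela_sequences(blocks)
```

    Each of the 16 resulting one-dimensional sequences collects coefficients of similar statistics, which is what makes a separate trellis search per sequence effective.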

  1. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
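
    The inlier/outlier split that RANSAC performs on putative matches can be sketched as follows; for brevity this uses a simple translation model rather than the full epipolar geometry, and all names and data are illustrative assumptions.

```python
# Sketch of RANSAC's inlier/outlier split on putative feature matches,
# using a 1-point translation model instead of full epipolar geometry.
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # model from a 1-point sample
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():    # keep the best consensus set
            best_inliers = inliers
    return best_inliers

src = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
dst = src + np.array([2., 3.])                    # true shift of (2, 3)
dst[3] = [0., 0.]                                 # one gross mismatch
inliers = ransac_translation(src, dst)
```

    The mismatched point never joins the consensus set, so only the three consistent matches survive for pose estimation.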

  2. Nursing Student Perceptions Regarding Simulation Experience Sequencing.

    PubMed

    Woda, Aimee A; Gruenke, Theresa; Alt-Gehrman, Penny; Hansen, Jamie

    2016-09-01

    The use of simulated learning experiences (SLEs) has increased within nursing curricula, with positive learning outcomes for nursing students. The purpose of this study is to explore nursing students' perceptions of their clinical decision making (CDM) related to the block sequencing of different patient care experiences, SLEs versus hospital-based learning experiences (HLEs). A qualitative descriptive design used open-ended survey questions to generate information about the block sequencing of SLEs and its impact on nursing students' perceived CDM. Three themes emerged from the data: Preexperience Anxiety, Real-Time Decision Making, and Increased Patient Care Experiences. Nursing students identified that having SLEs prior to HLEs provided several benefits. Even when students preferred SLEs prior to HLEs, the sequence did not impact their CDM. This suggests that alternating block sequencing can be used without impacting students' perceptions of their ability to make decisions. [J Nurs Educ. 2016;55(9):528-532.]. Copyright 2016, SLACK Incorporated.

  3. Alignment-free microbial phylogenomics under scenarios of sequence divergence, genome rearrangement and lateral genetic transfer.

    PubMed

    Bernard, Guillaume; Chan, Cheong Xin; Ragan, Mark A

    2016-07-01

    Alignment-free (AF) approaches have recently been highlighted as alternatives to methods based on multiple sequence alignment in phylogenetic inference. However, the sensitivity of AF methods to genome-scale evolutionary scenarios is little known. Here, using simulated microbial genome data we systematically assess the sensitivity of nine AF methods to three important evolutionary scenarios: sequence divergence, lateral genetic transfer (LGT) and genome rearrangement. Among these, AF methods are most sensitive to the extent of sequence divergence, less sensitive to low and moderate frequencies of LGT, and most robust against genome rearrangement. We describe the application of AF methods to three well-studied empirical genome datasets, and introduce a new application of the jackknife to assess node support. Our results demonstrate that AF phylogenomics is computationally scalable to multi-genome data and can generate biologically meaningful phylogenies and insights into microbial evolution.

  4. Multi-modulus algorithm based on global artificial fish swarm intelligent optimization of DNA encoding sequences.

    PubMed

    Guo, Y C; Wang, H; Wu, H P; Zhang, M Q

    2015-12-21

    To address the defects of large mean square error (MSE) and slow convergence speed when equalizing multi-modulus signals with the constant modulus algorithm (CMA), a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
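
    The multi-modulus update that the optimized initial weights feed into can be sketched as follows; the sign convention, step size, and toy QPSK setup are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch of the multi-modulus algorithm (MMA) stochastic-gradient update:
# the real and imaginary parts of the equalizer output are driven toward
# separate moduli Rr, Ri. Convention and parameters are assumptions.
import numpy as np

def mma_update(w, x, mu, Rr, Ri):
    """One MMA step for tap weights w and tap-input vector x."""
    y = np.vdot(w, x)                        # equalizer output y = w^H x
    err = y.real * (y.real**2 - Rr) + 1j * y.imag * (y.imag**2 - Ri)
    return w - mu * err * x.conj()

rng = np.random.default_rng(1)
w = np.array([0.5 + 0.0j])                   # single mis-scaled tap
for _ in range(2000):
    s = rng.choice([-1.0, 1.0]) + 1j * rng.choice([-1.0, 1.0])   # QPSK symbol
    w = mma_update(w, np.array([s]), mu=0.01, Rr=1.0, Ri=1.0)
gain = abs(w[0])                             # should approach 1
```

    Because the per-quadrature errors vanish only when both the real and imaginary output components reach their target moduli, the tap gain is driven to 1 for this toy channel; the paper's contribution is choosing a good initial w via GAFS-optimized DNA coding.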

  5. Analyzing ion distributions around DNA: sequence-dependence of potassium ion distributions from microsecond molecular dynamics

    PubMed Central

    Pasi, Marco; Maddocks, John H.; Lavery, Richard

    2015-01-01

    Microsecond molecular dynamics simulations of B-DNA oligomers carried out in an aqueous environment with a physiological salt concentration enable us to perform a detailed analysis of how potassium ions interact with the double helix. The oligomers studied contain all 136 distinct tetranucleotides and we are thus able to make a comprehensive analysis of base sequence effects. Using a recently developed curvilinear helicoidal coordinate method we are able to analyze the details of ion populations and densities within the major and minor grooves and in the space surrounding DNA. The results show higher ion populations than have typically been observed in earlier studies and sequence effects that go beyond the nature of individual base pairs or base pair steps. We also show that, in some special cases, ion distributions converge very slowly and, on a microsecond timescale, do not reflect the symmetry of the corresponding base sequence. PMID:25662221

  6. snpAD: An ancient DNA genotype caller.

    PubMed

    Prüfer, Kay

    2018-06-21

    The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
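
    The error-aware genotype-calling idea can be sketched as follows; this is a simplified illustration of the principle, not snpAD's actual model or code, and the per-position error rates are assumed values.

```python
# Simplified sketch of error-aware genotype likelihoods (the principle
# behind position-dependent error modeling, not snpAD's actual code):
# mismatches at high-error end positions are down-weighted.
import math

def genotype_log_likelihoods(bases, errors, alt='T'):
    """Log-likelihood of the three genotypes at a biallelic site."""
    gl = {}
    for g, p_alt in (('RR', 0.0), ('RA', 0.5), ('AA', 1.0)):
        ll = 0.0
        for b, e in zip(bases, errors):
            p_obs_alt = p_alt * (1 - e) + (1 - p_alt) * e   # P(read shows alt)
            ll += math.log(p_obs_alt if b == alt else 1.0 - p_obs_alt)
        gl[g] = ll
    return gl

# two C->T mismatches, both at high-error positions near fragment ends
bases  = ['C', 'C', 'T', 'C', 'C', 'T']
errors = [0.01, 0.01, 0.30, 0.01, 0.01, 0.30]
gl = genotype_log_likelihoods(bases, errors)
call = max(gl, key=gl.get)
```

    Because the two T reads sit at high-error end positions, the homozygous-reference genotype explains them better than a heterozygous call would, which is how damage-aware calling avoids inflating heterozygosity.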

  7. Limiting vibration in systems with constant amplitude actuators through command preshaping. M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Rogers, Keith Eric

    1994-01-01

    The basic concepts of command preshaping were taken and adapted to the framework of systems with constant amplitude (on-off) actuators. In this context, pulse sequences were developed which help to attenuate vibration in flexible systems with high robustness to errors in frequency identification. Sequences containing impulses of different magnitudes were approximated by sequences containing pulses of different durations. The effects of variation in pulse width on this approximation were examined. Sequences capable of minimizing loads induced in flexible systems during execution of commands were also investigated. The usefulness of these techniques in real-world situations was verified by application to a high fidelity simulation of the space shuttle. Results showed that constant amplitude preshaping techniques offer a substantial improvement in vibration reduction over both the standard and upgraded shuttle control methods and may be mission enabling for use of the shuttle with extremely massive payloads.
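
    The impulse-sequence idea behind command preshaping can be sketched with the classic two-impulse zero-vibration (ZV) shaper; the formula below is the standard ZV shaper, not the thesis' constant-amplitude pulse sequences, and the mode parameters are illustrative.

```python
# Sketch of command preshaping with the classic two-impulse zero-vibration
# (ZV) shaper; the thesis approximates such impulse sequences with
# constant-amplitude pulses of different durations.
import math

def zv_shaper(wn, zeta):
    """(amplitude, time) pairs of the two-impulse ZV shaper for one mode."""
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    wd = wn * math.sqrt(1.0 - zeta**2)        # damped natural frequency
    return [(1.0 / (1.0 + K), 0.0), (K / (1.0 + K), math.pi / wd)]

shaper = zv_shaper(wn=2.0, zeta=0.0)          # undamped mode at 2 rad/s
```

    Convolving any command with these (amplitude, time) impulses yields a shaped command whose residual vibration at the modeled frequency is zero: the second impulse, launched half a vibration period after the first, cancels the oscillation the first one excites.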

  8. Denoising time-resolved microscopy image sequences with singular value thresholding.

    PubMed

    Furnival, Tom; Leary, Rowan K; Midgley, Paul A

    2017-07-01

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
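
    The core of the method can be sketched as singular value soft-thresholding on a Casorati matrix (each frame flattened into one column); the fixed threshold below is an assumption, whereas the paper selects it automatically with an unbiased risk estimator.

```python
# Sketch of singular value thresholding (SVT) for sequence denoising:
# soft-threshold the singular values to keep the low-rank part of a
# noisy frame matrix. The fixed threshold tau is an assumption.
import numpy as np

def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0.0, 1.0, 64), np.ones(20))   # rank-1 "sequence"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svt(noisy, tau=2.0)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

    Temporal correlation makes the clean sequence low rank, so thresholding suppresses the noise-dominated singular components while retaining the dominant ones.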

  9. Mapping Base Modifications in DNA by Transverse-Current Sequencing

    NASA Astrophysics Data System (ADS)

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2018-02-01

    Sequencing DNA modifications and lesions, such as methylation of cytosine and oxidation of guanine, is even more important and challenging than sequencing the genome itself. The traditional methods for detecting DNA modifications are either insensitive to these modifications or require additional processing steps to identify a particular type of modification. Transverse-current sequencing in nanopores can potentially identify the canonical bases and base modifications in the same run. In this work, we demonstrate that the most common DNA epigenetic modifications and lesions can be detected with any predefined accuracy based on their tunneling current signature. Our results are based on simulations of the nanopore tunneling current through DNA molecules, calculated using nonequilibrium electron-transport methodology within an effective multiorbital model derived from first-principles calculations, followed by a base-calling algorithm accounting for neighbor current-current correlations. This methodology can be integrated with existing experimental techniques to improve base-calling fidelity.

  10. Analyzing the relationship between sequence divergence and nodal support using Bayesian phylogenetic analyses.

    PubMed

    Makowsky, Robert; Cox, Christian L; Roelke, Corey; Chippindale, Paul T

    2010-11-01

    Determining the appropriate gene for phylogeny reconstruction can be a difficult process. Rapidly evolving genes tend to resolve recent relationships, but suffer from alignment issues and increased homoplasy among distantly related species. Conversely, slowly evolving genes generally perform best for deeper relationships, but lack sufficient variation to resolve recent relationships. We determine the relationship between sequence divergence and Bayesian phylogenetic reconstruction ability using both natural and simulated datasets. The natural data are based on 28 well-supported relationships within the subphylum Vertebrata. Sequences of 12 genes were acquired and Bayesian analyses were used to determine phylogenetic support for correct relationships. Simulated datasets were designed to determine whether an optimal range of sequence divergence exists across extreme phylogenetic conditions. Across all genes we found that an optimal range of divergence for resolving the correct relationships does exist, although this level of divergence expectedly depends on the distance metric. Simulated datasets show that an optimal range of sequence divergence exists across diverse topologies and models of evolution. We determine that a simple-to-measure property of genetic sequences (genetic distance) is related to phylogenetic reconstruction ability in Bayesian analyses. This information should be useful for selecting the most informative gene to resolve any relationship, especially those that are difficult to resolve, as well as for minimizing both cost and confounding information during project design. Copyright © 2010. Published by Elsevier Inc.

  11. Design of Protein Multi-specificity Using an Independent Sequence Search Reduces the Barrier to Low Energy Sequences

    PubMed Central

    Sevy, Alexander M.; Jacobs, Tim M.; Crowe, James E.; Meiler, Jens

    2015-01-01

    Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a ‘single state’ design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states, the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we designed “promiscuous”, polyspecific proteins against all binding partners and measured recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes. PMID:26147100

  12. Pulse Sequence Programming in a Dynamic Visual Environment: SequenceTree

    PubMed Central

    Magland, Jeremy F.; Li, Cheng; Langham, Michael C.; Wehrli, Felix W.

    2015-01-01

    Purpose To describe SequenceTree (ST), an open-source, integrated software environment for implementing MRI pulse sequences and exporting them to actual MRI scanners. The software is a user-friendly alternative to vendor-supplied pulse sequence design and editing tools and is suited for non-programmers and programmers alike. Methods The integrated user interface was programmed using the Qt4/C++ toolkit. As parameters and code are modified, the pulse sequence diagram is automatically updated within the user interface. Several aspects of pulse programming are handled automatically, allowing users to focus on higher-level aspects of sequence design. Sequences can be simulated using a built-in Bloch equation solver and then exported for use on a Siemens MRI scanner; other types of scanners will ideally be supported in the future. Results The software has been used for eight years in the authors’ laboratory and elsewhere and has been utilized in more than fifty peer-reviewed publications in areas such as cardiovascular imaging, solid state and non-proton NMR, MR elastography, and high resolution structural imaging. Conclusion ST is an innovative, open-source, visual pulse sequence environment for MRI combining simplicity with flexibility and is ideal for both advanced users and those with limited programming experience. PMID:25754837

  13. Experimental and Numerical Studies on Fiber Deformation and Formability in Thermoforming Process Using a Fast-Cure Carbon Prepreg: Effect of Stacking Sequence and Mold Geometry.

    PubMed

    Bae, Daeryeong; Kim, Shino; Lee, Wonoh; Yi, Jin Woo; Um, Moon Kwang; Seong, Dong Gi

    2018-05-21

    A fast-cure carbon fiber/epoxy prepreg was thermoformed against a replicated automotive roof panel mold (square-cup) to investigate the effect of the stacking sequence of prepreg layers with unidirectional and plain woven fabrics, and of mold geometry with different drawing angles and depths, on the fiber deformation and formability of the prepreg. The optimum forming condition was determined via analysis of the material properties of the epoxy resin. The non-linear mechanical properties of the prepreg in the deformation modes of inter- and intra-ply shear, tension and bending were measured to be used as input data for the commercial virtual forming simulation software. The prepreg with a stacking sequence containing the plain-woven carbon prepreg on the outer layer of the laminate was successfully thermoformed against a mold with a depth of 20 mm and a tilting angle of 110°. Experimental results for the shear deformations at each corner of the thermoformed square-cup product were compared with the simulation, and a similarity in the overall tendency of the shear angle in the path at each corner was observed. The results are expected to contribute to the optimization of parameters on materials, mold design and processing in the thermoforming mass-production process for manufacturing high-quality automotive parts with a square-cup geometry.

  15. A fault injection experiment using the AIRLAB Diagnostic Emulation Facility

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Mangum, Scott; Scheper, Charlotte

    1988-01-01

    The preparation for, conduct of, and results of a simulation-based fault injection experiment conducted using the AIRLAB Diagnostic Emulation facilities are described. An objective of this experiment was to determine the effectiveness of the diagnostic self-test sequences used to uncover latent faults in a logic network providing the key fault tolerance features for a flight control computer. Another objective was to develop methods, tools, and techniques for conducting the experiment. More than 1600 faults were injected into a logic-gate-level model of the Data Communicator/Interstage (C/I). For each fault injected, diagnostic self-test sequences consisting of over 300 test vectors were supplied to the C/I model as inputs. For each test vector within a test sequence, the outputs from the C/I model were compared to the outputs of a fault-free C/I. If the outputs differed, the fault was considered detectable for the given test vector. These results were then analyzed to determine the effectiveness of some test sequences. The results established the coverage of the self-test diagnostics, identified areas in the C/I logic where the tests did not locate faults, and suggested fault-latency reduction opportunities.
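
    The detection bookkeeping described above can be sketched as follows; the tiny gate network, fault set, and test vectors are illustrative assumptions, not the C/I model.

```python
# Sketch of fault-injection coverage: a fault counts as detected if any
# test vector makes the faulty model's output differ from the fault-free
# model's. The tiny gate network is an assumption, not the C/I model.
def run_model(vector, stuck_at=None):
    """2-input AND feeding an OR; optional stuck-at fault on the AND output."""
    a, b, c = vector
    and_out = a & b
    if stuck_at is not None:
        and_out = stuck_at              # inject the fault
    return and_out | c

def coverage(faults, test_vectors):
    detected = set()
    for f in faults:
        if any(run_model(v, stuck_at=f) != run_model(v) for v in test_vectors):
            detected.add(f)
    return len(detected) / len(faults)

test_vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 1)]
cov = coverage([0, 1], test_vectors)    # stuck-at-0 and stuck-at-1 faults
```

    Faults that no vector exposes lower the coverage figure and mark the spots in the logic where the self-test sequences need strengthening.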

  16. The impact of simulation sequencing on perceived clinical decision making.

    PubMed

    Woda, Aimee; Hansen, Jamie; Paquette, Mary; Topp, Robert

    2017-09-01

    An emerging nursing education trend is to utilize simulated learning experiences as a means to optimize competency and decision making skills. The purpose of this study was to examine differences in students' perception of clinical decision making and clinical decision making-related self-confidence and anxiety based on the sequence (order) in which they participated in a block of simulated versus hospital-based learning experiences. A quasi-experimental crossover design was used. Between- and within-group differences were found relative to self-confidence with the decision making process. When comparing groups at baseline, the simulation-followed-by-hospital group had significantly higher self-confidence scores; however, at 14 weeks the two groups were not significantly different. Significant within-group differences were found in the simulation-followed-by-hospital group only, demonstrating a significant decrease in clinical decision making-related anxiety across the semester. Finally, there was no significant difference in perceived clinical decision making within or between the groups at the two measurement points. Preliminary findings suggest that simulated learning experiences can be offered in alternating sequences without impacting the process, anxiety, or confidence associated with clinical decision making. This study provides beginning evidence to guide curriculum development and allow flexibility based on student needs and available resources. Copyright © 2017. Published by Elsevier Ltd.

  17. Synchronous Parallel Emulation and Discrete Event Simulation System with Self-Contained Simulation Objects and Active Event Objects

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S. (Inventor)

    1998-01-01

    The present invention is embodied in a method of performing object-oriented simulation and a system having inter-connected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.
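    A minimal sequential sketch of the passive-object / active-event split described above: simulation objects hold state, and time-stamped event objects are the only things allowed to change it or to schedule new events. The class names, the counter state, and the single-process event queue are assumptions for illustration; the patent's parallel, multi-node message passing is not modeled.

```python
import heapq

class SimObject:
    """Passive, self-contained simulation object: only active event
    objects may read or change its state variables."""
    def __init__(self, name):
        self.name, self.count = name, 0

class Event:
    """Active event object with its own time stamp; processing it changes
    one simulation object and may generate a new event message."""
    def __init__(self, time, target):
        self.time, self.target = time, target
    def __lt__(self, other):            # orders the event queue by time
        return self.time < other.time
    def process(self, sim):
        sim.objects[self.target].count += 1     # change state variables
        if self.time + 1 < sim.horizon:         # schedule a follow-up event
            sim.schedule(Event(self.time + 1, self.target))

class Simulator:
    def __init__(self, objects, horizon):
        self.objects = {o.name: o for o in objects}
        self.horizon, self.queue = horizon, []
    def schedule(self, event):
        heapq.heappush(self.queue, event)
    def run(self):                      # process events in time-stamp order
        while self.queue:
            heapq.heappop(self.queue).process(self)

sim = Simulator([SimObject("a"), SimObject("b")], horizon=5)
sim.schedule(Event(0, "a"))
sim.schedule(Event(2, "b"))
sim.run()
```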

  18. An ecological vegetation-activated sludge process (V-ASP) for decentralized wastewater treatment: system development, treatment performance, and mathematical modeling.

    PubMed

    Yuan, Jiajia; Dong, Wenyi; Sun, Feiyun; Li, Pu; Zhao, Ke

    2016-05-01

    An environmentally friendly decentralized wastewater treatment process comprising an activated sludge process (ASP) and wetland vegetation, named the vegetation-activated sludge process (V-ASP), was developed for decentralized wastewater treatment. The long-term experimental results evidenced that the vegetation sequencing batch reactor (V-SBR) process had consistently higher and more stable removal efficiencies of organic substances and nutrients from domestic wastewater compared with a traditional sequencing batch reactor (SBR). The vegetation allocated into the V-SBR system could not only remove nutrients through vegetation transpiration but also provide a large surface area for enhanced microorganism activity. This high vegetation transpiration ratio enhanced nutrient removal from wastewater mainly through flux enhancement, accelerated oxygen and substrate transport, and stimulated vegetation respiration. A mathematical model based on ASM2d was successfully established that incorporates the specific function of the vegetation to simulate system performance. The simulation results on the influence of operational parameters on V-ASP treatment effectiveness demonstrated that V-SBR has a high resistance to seasonal temperature fluctuations and influent loading shocks.

  19. Combinatorial Pooling Enables Selective Sequencing of the Barley Gene Space

    PubMed Central

    Lonardi, Stefano; Duma, Denisa; Alpert, Matthew; Cordero, Francesca; Beccuti, Marco; Bhat, Prasanna R.; Wu, Yonghui; Ciardo, Gianfranco; Alsaihati, Burair; Ma, Yaqin; Wanamaker, Steve; Resnik, Josh; Bozdag, Serdar; Luo, Ming-Cheng; Close, Timothy J.

    2013-01-01

    For the vast majority of species, including many economically or ecologically important organisms, progress in biological research is hampered by the lack of a reference genome sequence. Despite recent advances in sequencing technologies, several factors still limit the availability of such a critical resource. At the same time, many research groups and international consortia have already produced BAC libraries and physical maps and now are in a position to proceed with the development of whole-genome sequences organized around a physical map anchored to a genetic map. We propose a BAC-by-BAC sequencing protocol that combines combinatorial pooling design and second-generation sequencing technology to efficiently approach de novo selective genome sequencing. We show that combinatorial pooling is a cost-effective and practical alternative to exhaustive DNA barcoding when preparing sequencing libraries for hundreds or thousands of DNA samples, such as, in this case, gene-bearing minimum-tiling-path BAC clones. The novelty of the protocol hinges on the computational ability to efficiently compare hundreds of millions of short reads and assign them to the correct BAC clones (deconvolution) so that the assembly can be carried out clone-by-clone. Experimental results on simulated data for the rice genome show that the deconvolution is very accurate, and the resulting BAC assemblies have high quality. Results on real data for a gene-rich subset of the barley genome confirm that the deconvolution is accurate and the BAC assemblies have good quality. While our method cannot provide the level of completeness that one would achieve with a comprehensive whole-genome sequencing project, we show that it is quite successful in reconstructing the gene sequences within BACs. In the case of plants such as barley, this level of sequence knowledge is sufficient to support critical end-point objectives such as map-based cloning and marker-assisted breeding. PMID:23592960

  20. Combinatorial pooling enables selective sequencing of the barley gene space.

    PubMed

    Lonardi, Stefano; Duma, Denisa; Alpert, Matthew; Cordero, Francesca; Beccuti, Marco; Bhat, Prasanna R; Wu, Yonghui; Ciardo, Gianfranco; Alsaihati, Burair; Ma, Yaqin; Wanamaker, Steve; Resnik, Josh; Bozdag, Serdar; Luo, Ming-Cheng; Close, Timothy J

    2013-04-01

    For the vast majority of species, including many economically or ecologically important organisms, progress in biological research is hampered by the lack of a reference genome sequence. Despite recent advances in sequencing technologies, several factors still limit the availability of such a critical resource. At the same time, many research groups and international consortia have already produced BAC libraries and physical maps and now are in a position to proceed with the development of whole-genome sequences organized around a physical map anchored to a genetic map. We propose a BAC-by-BAC sequencing protocol that combines combinatorial pooling design and second-generation sequencing technology to efficiently approach de novo selective genome sequencing. We show that combinatorial pooling is a cost-effective and practical alternative to exhaustive DNA barcoding when preparing sequencing libraries for hundreds or thousands of DNA samples, such as, in this case, gene-bearing minimum-tiling-path BAC clones. The novelty of the protocol hinges on the computational ability to efficiently compare hundreds of millions of short reads and assign them to the correct BAC clones (deconvolution) so that the assembly can be carried out clone-by-clone. Experimental results on simulated data for the rice genome show that the deconvolution is very accurate, and the resulting BAC assemblies have high quality. Results on real data for a gene-rich subset of the barley genome confirm that the deconvolution is accurate and the BAC assemblies have good quality. While our method cannot provide the level of completeness that one would achieve with a comprehensive whole-genome sequencing project, we show that it is quite successful in reconstructing the gene sequences within BACs. In the case of plants such as barley, this level of sequence knowledge is sufficient to support critical end-point objectives such as map-based cloning and marker-assisted breeding.
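    The deconvolution step described above, assigning reads to BAC clones from the combination of pools in which they occur, can be sketched with a toy exact-match rule. The 3-of-6 pool signatures and the unique-match requirement are illustrative assumptions; the published protocol uses a shifted-transversal pooling design and tolerates noisy pool evidence.

```python
from itertools import combinations

POOLS = 6
clones = ["BAC%02d" % i for i in range(10)]
# each clone gets a distinct 3-of-6 pool signature (toy pooling design)
signatures = dict(zip(clones, combinations(range(POOLS), 3)))

def deconvolve(read_pools, signatures):
    """Assign a read to the clone whose signature covers the set of pools
    in which the read was observed; ambiguous reads stay unassigned."""
    hits = [c for c, sig in signatures.items() if read_pools <= set(sig)]
    return hits[0] if len(hits) == 1 else None

# a read observed in exactly the pools of BAC03's signature is unambiguous
assigned = deconvolve(set(signatures["BAC03"]), signatures)
```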

  1. Structure and specificity of the RNA-guided endonuclease Cas9 during DNA interrogation, target binding and cleavage

    PubMed Central

    Josephs, Eric A.; Kocak, D. Dewran; Fitzgibbon, Christopher J.; McMenemy, Joshua; Gersbach, Charles A.; Marszalek, Piotr E.

    2015-01-01

    CRISPR-associated endonuclease Cas9 cuts DNA at variable target sites designated by a Cas9-bound RNA molecule. Cas9's ability to be directed by single ‘guide RNA’ molecules to target nearly any sequence has been recently exploited for a number of emerging biological and medical applications. Therefore, understanding the nature of Cas9's off-target activity is of paramount importance for its practical use. Using atomic force microscopy (AFM), we directly resolve individual Cas9 and nuclease-inactive dCas9 proteins as they bind along engineered DNA substrates. High-resolution imaging allows us to determine their relative propensities to bind with different guide RNA variants to targeted or off-target sequences. Mapping the structural properties of Cas9 and dCas9 to their respective binding sites reveals a progressive conformational transformation at DNA sites with increasing sequence similarity to its target. With kinetic Monte Carlo (KMC) simulations, these results provide evidence of a ‘conformational gating’ mechanism driven by the interactions between the guide RNA and the 14th–17th nucleotide region of the targeted DNA, the stabilities of which we find correlate significantly with reported off-target cleavage rates. KMC simulations also reveal potential methodologies to engineer guide RNA sequences with improved specificity by considering the invasion of guide RNAs into targeted DNA duplex. PMID:26384421

  2. Discriminating between stabilizing and destabilizing protein design mutations via recombination and simulation.

    PubMed

    Johnson, Lucas B; Gintner, Lucas P; Park, Sehoo; Snow, Christopher D

    2015-08-01

    Accuracy of current computational protein design (CPD) methods is limited by inherent approximations in energy potentials and sampling. These limitations are often used to qualitatively explain design failures; however, relatively few studies provide specific examples or quantitative details that can be used to improve future CPD methods. Expanding the design method to include a library of sequences provides data that is well suited for discriminating between stabilizing and destabilizing design elements. Using thermophilic endoglucanase E1 from Acidothermus cellulolyticus as a model enzyme, we computationally designed a sequence with 60 mutations. The design sequence was rationally divided into structural blocks and recombined with the wild-type sequence. Resulting chimeras were assessed for activity and thermostability. Surprisingly, unlike previous chimera libraries, regression analysis based on one- and two-body effects was not sufficient for predicting chimera stability. Analysis of molecular dynamics simulations proved helpful in distinguishing stabilizing and destabilizing mutations. Reverting to the wild-type amino acid at destabilized sites partially regained design stability, and introducing predicted stabilizing mutations in wild-type E1 significantly enhanced thermostability. The ability to isolate stabilizing and destabilizing elements in computational design offers an opportunity to interpret previous design failures and improve future CPD methods. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Molecular dynamics simulations of certain RGD-based peptides from Kistrin provide insight into the higher activity of REI-RGD34 protein at higher temperature.

    PubMed

    Upadhyay, Sanjay K

    2014-05-01

    To determine the bioactive conformation required to bind the receptor αIIbβ3, the peptide sequence RIPRGDMP from Kistrin was inserted into the CDR1 loop region of the REI protein, resulting in REI-RGD34. The activity of REI-RGD34 towards the receptor αIIbβ3 was observed to increase at higher temperature. This could be explained in either of two ways: the modified complex forces the restricted peptide to adopt the bioactive conformation, or it unfolds the peptide in a way that opens its binding surface with high affinity for the receptor. Here, we model the conformational preference of the RGD sequence in RIPRGDMP at 25 and 42 °C using multiple MD simulations. Further, we model the peptide sequences RGD, PRGD and PRGDMP from Kistrin to observe the effect of flanking residues on the conformational sampling of RGD. The presence of flanking residues around the RGD peptide greatly influenced the conformational sampling. A transition from a bend to a turn conformation was observed for the RGD sequence at 42 °C. The turn conformation shows the pharmacophoric parameters required to recognize the receptor αIIbβ3. Thus, the temperature-dependent activity of RIPRGDMP when inserted into the loop region of REI can be explained by the presence of the turn conformation. This study will help in designing potential antagonists of the receptor αIIbβ3.

  4. TU-H-CAMPUS-JeP3-05: Adaptive Determination of Needle Sequence HDR Prostate Brachytherapy with Divergent Needle-By-Needle Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borot de Battisti, M; Maenhout, M; Lagendijk, J J W

    Purpose: To develop a new method which adaptively determines the optimal needle insertion sequence for HDR prostate brachytherapy involving divergent needle-by-needle dose delivery by, e.g., a robotic device. A needle insertion sequence is calculated at the beginning of the intervention and updated after each needle insertion with feedback on needle positioning errors. Methods: Needle positioning errors and anatomy changes may occur during HDR brachytherapy, which can lead to errors in the delivered dose. A novel strategy was developed to calculate and update the needle sequence and the dose plan after each needle insertion with feedback on needle positioning errors. The dose plan optimization was performed by numerical simulations. The proposed needle sequence determination optimizes the final dose distribution based on the dose coverage impact of each needle. This impact is predicted stochastically by needle insertion simulations. HDR procedures were simulated with varying numbers of needle insertions (4 to 12) using 11 patient MR data-sets with PTV, prostate, urethra, bladder and rectum delineated. Needle positioning errors were modeled by random, normally distributed angulation errors (standard deviation of 3 mm at the needle's tip). The final dose parameters were compared for the situations where the needle with the largest vs. the smallest dose coverage impact was selected at each insertion. Results: Over all scenarios, the percentage of clinically acceptable final dose distributions improved when the needle selected had the largest dose coverage impact (91%) compared to the smallest (88%). The differences were larger for few (4 to 6) needle insertions (maximum difference scenario: 79% vs. 60%). The computation time of the needle sequence optimization was below 60 s. Conclusion: A new adaptive needle sequence determination method for HDR prostate brachytherapy was developed. Coupled with adaptive planning, selecting the needle with the largest dose coverage impact increases the chance of reaching the clinical constraints. M. Borot de Battisti is funded by Philips Medical Systems Nederland B.V.; M. Moerland is principal investigator on a contract funded by Philips Medical Systems Nederland B.V.; G. Hautvast and D. Binnekamp are full-time employees of Philips Medical Systems Nederland B.V.
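    The selection rule compared in the abstract (insert next the needle with the largest predicted dose coverage impact) reduces to a greedy argmax over the remaining needles; in the adaptive scheme the impact estimates are recomputed after each insertion from the measured needle position. The numeric impact values below are made-up placeholders for the stochastically predicted impacts, not clinical data.

```python
def next_needle(remaining, impact):
    # pick the needle predicted to improve dose coverage the most
    return max(remaining, key=lambda n: impact[n])

# placeholder impacts, standing in for the stochastic insertion-simulation
# predictions of each candidate needle's dose coverage impact
impact = {"n1": 0.12, "n2": 0.31, "n3": 0.07}

order, remaining = [], set(impact)
while remaining:
    n = next_needle(remaining, impact)
    order.append(n)
    remaining.remove(n)
    # the adaptive scheme would re-estimate `impact` here using feedback
    # on the actually realized needle position before the next choice
```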

  5. Observing complex action sequences: The role of the fronto-parietal mirror neuron system.

    PubMed

    Molnar-Szakacs, Istvan; Kaplan, Jonas; Greenfield, Patricia M; Iacoboni, Marco

    2006-11-15

    A fronto-parietal mirror neuron network in the human brain supports the ability to represent and understand observed actions allowing us to successfully interact with others and our environment. Using functional magnetic resonance imaging (fMRI), we wanted to investigate the response of this network in adults during observation of hierarchically organized action sequences of varying complexity that emerge at different developmental stages. We hypothesized that fronto-parietal systems may play a role in coding the hierarchical structure of object-directed actions. The observation of all action sequences recruited a common bilateral network including the fronto-parietal mirror neuron system and occipito-temporal visual motion areas. Activity in mirror neuron areas varied according to the motoric complexity of the observed actions, but not according to the developmental sequence of action structures, possibly due to the fact that our subjects were all adults. These results suggest that the mirror neuron system provides a fairly accurate simulation process of observed actions, mimicking internally the level of motoric complexity. We also discuss the results in terms of the links between mirror neurons, language development and evolution.

  6. Stability of recursive out-of-sequence measurement filters: an open problem

    NASA Astrophysics Data System (ADS)

    Chen, Lingji; Moshtagh, Nima; Mehra, Raman K.

    2011-06-01

    In many applications where communication delays are present, measurements with earlier time stamps can arrive out-of-sequence, i.e., after state estimates have been obtained for the current time instant. To incorporate such an Out-Of-Sequence Measurement (OOSM), many algorithms have been proposed in the literature to obtain or approximate the optimal estimate that would have been obtained if the OOSM had arrived in-sequence. When OOSM occurs repeatedly, approximate estimations as a result of incorporating one OOSM have to serve as the basis for incorporating yet another OOSM. The question of whether the "approximation of approximation" is well behaved, i.e., whether approximation errors accumulate in a recursive setting, has not been adequately addressed in the literature. This paper draws attention to the stability question of recursive OOSM processing filters, formulates the problem in a specific setting, and presents some simulation results that suggest that such filters are indeed well-behaved. Our hope is that more research will be conducted in the future to rigorously establish stability properties of these filters.
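    As a point of reference for the approximate recursive filters discussed above, the optimal but expensive way to handle an OOSM is to buffer all measurements and re-run the filter in time order; the recursive algorithms in the literature try to match this result without full reprocessing. The scalar random-walk Kalman filter below is a minimal illustration of that reprocessing baseline, not any specific published OOSM algorithm.

```python
def kf_step(x, p, z, q, r):
    # scalar random-walk Kalman filter: predict, then update with z
    p = p + q               # predict (identity state transition)
    k = p / (p + r)         # Kalman gain
    x = x + k * (z - x)     # measurement update
    p = (1 - k) * p
    return x, p

def filter_with_oosm(measurements, q=0.01, r=1.0):
    # reprocessing baseline: sort (time, z) pairs, re-run from scratch
    x, p = 0.0, 1.0
    for _, z in sorted(measurements):
        x, p = kf_step(x, p, z, q, r)
    return x, p

# a late-arriving measurement (time stamp 1) yields the same estimate as
# in-sequence arrival, because the buffer is re-sorted before filtering
late = filter_with_oosm([(2, 1.0), (1, 0.5), (3, 1.5)])
in_order = filter_with_oosm([(1, 0.5), (2, 1.0), (3, 1.5)])
```

    The stability question raised in the paper concerns filters that avoid this re-run by folding each late measurement into the current estimate approximately, so that approximation errors could in principle accumulate.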

  7. Cavitation-induced fragmentation of an acoustically-levitated droplet

    NASA Astrophysics Data System (ADS)

    Gonzalez Avila, Silvestre Roberto; Ohl, Claus-Dieter

    2015-12-01

    In this paper we investigate the initial sequence of events that leads to the fragmentation of a millimetre-sized water droplet interacting with a focused ns-laser pulse. The experimental results show complex processes that result from the interaction of the initial shock wave, generated by the plasma, with the soft boundary of the levitating droplet; furthermore, when the waves reflected from the walls of the droplet refocus, they leave behind a trail of microbubbles that later act as cavitation inception regions. Numerical simulations of a shock wave impacting and reflecting from a soft boundary are also reported; the simulated results show that the lowest pressure inside the droplet occurs at the equatorial plane. The results of the numerical model display good agreement with the experimental results both in time and in space.

  8. Molecular dynamics studies of the 3D structure and planar ligand binding of a quadruplex dimer.

    PubMed

    Li, Ming-Hui; Luo, Quan; Xue, Xiang-Gui; Li, Ze-Sheng

    2011-03-01

    G-rich sequences can fold into a four-stranded structure called a G-quadruplex, and sequences with short loops are able to aggregate to form stable quadruplex multimers. Few studies have characterized the properties of this variety of quadruplex multimers. Using molecular modeling and molecular dynamics simulations, the present study investigated a dimeric G-quadruplex structure formed from a simple sequence of d(GGGTGGGTGGGTGGGT) (G1), and its interactions with a planar ligand of a perylene derivative (Tel03). A series of analytical methods, including free energy calculations and principal components analysis (PCA), was used. The results show that a dimer structure with stacked parallel monomer structures is maintained well during the entire simulation. Tel03 can bind to the dimer efficiently through end stacking, and the binding mode of the ligand stacked with the 3'-terminal thymine base is most favorable. PCA showed that the dominant motions in the free dimer occur on the loop regions, and the presence of the ligand reduces the flexibility of the loops. Our investigation will assist in understanding the geometric structure of stacked G-quadruplex multimers and may be helpful as a platform for rational drug design.

  9. Some observations on mesh refinement schemes applied to shock wave phenomena

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1995-01-01

    This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.

  10. Mixed-mode oscillations in memristor emulator based Liénard system

    NASA Astrophysics Data System (ADS)

    Kingston, S. Leo; Suresh, K.; Thamilmaran, K.

    2018-04-01

    We report the existence of mixed-mode oscillations in a memristor-emulator-based Liénard system that is externally driven by a sinusoidal force. The charge-flux relationship of the memristor emulator device is modeled by a smooth cubic nonlinear element. The system exhibits successive period-adding sequences of mixed-mode oscillations over a wide parameter region. The electronic circuit of the memristor emulator was successfully implemented in PSpice; mixed-mode oscillations are observed in the PSpice experiments, and the obtained results qualitatively match the numerical simulations.

  11. Periodic sequence of stabilized wave segments in an excitable medium

    NASA Astrophysics Data System (ADS)

    Zykov, V. S.; Bodenschatz, E.

    2018-03-01

    Numerical computations show that a stabilization of a periodic sequence of wave segments propagating through an excitable medium is possible only in a restricted domain within the parameter space. By application of a free-boundary approach, we demonstrate that at the boundary of this domain the parameter H introduced in our Rapid Communication is constant. We show also that the discovered parameter predetermines the propagation velocity and the shape of the wave segments. The predictions of the free-boundary approach are in good quantitative agreement with results from numerical reaction-diffusion simulations performed on the modified FitzHugh-Nagumo model.

  12. A 2-step penalized regression method for family-based next-generation sequencing association studies.

    PubMed

    Ding, Xiuhua; Su, Shaoyong; Nandakumar, Kannabiran; Wang, Xiaoling; Fardo, David W

    2014-01-01

    Large-scale genetic studies are often composed of related participants, and utilizing familial relationships can be cumbersome and computationally challenging. We present an approach to efficiently handle sequencing data from complex pedigrees that incorporates information from rare variants as well as common variants. Our method employs a 2-step procedure that sequentially regresses out correlation from familial relatedness and then uses the resulting phenotypic residuals in a penalized regression framework to test for associations with variants within genetic units. The operating characteristics of this approach are detailed using simulation data based on a large, multigenerational cohort.
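    A sketch of the 2-step idea under simplifying assumptions: familial correlation is removed here by regressing on family-indicator columns (the paper works from pedigree relatedness directly), and the penalized step is a closed-form ridge fit rather than the paper's exact framework. Sample sizes, effect sizes, and the penalty value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50                  # subjects, variants in one genetic unit

# simulated data: 20 families, rare variants, 3 causal variants
fam = rng.integers(0, 20, n)
F = np.eye(20)[fam]                              # one-hot family design
G = rng.binomial(2, 0.05, (n, m)).astype(float)  # rare-variant genotypes
beta = np.zeros(m)
beta[:3] = 1.0
y = G @ beta + F @ rng.normal(0.0, 1.0, 20) + rng.normal(0.0, 1.0, n)

# Step 1: regress familial correlation out of the phenotype and keep the
# residuals (family indicators stand in for pedigree relatedness here)
y_resid = y - F @ np.linalg.lstsq(F, y, rcond=None)[0]

# Step 2: penalized regression of the residuals on the variants within
# the genetic unit (ridge penalty, closed form)
lam = 1.0
beta_hat = np.linalg.solve(G.T @ G + lam * np.eye(m), G.T @ y_resid)
```

    With this setup the causal variants' coefficients should dominate the null ones on average, which is the association signal the second step tests.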

  13. Facilitated sequence counting and assembly by template mutagenesis

    PubMed Central

    Levy, Dan; Wigler, Michael

    2014-01-01

    Presently, inferring the long-range structure of the DNA templates is limited by short read lengths. Accurate template counts suffer from distortions occurring during PCR amplification. We explore the utility of introducing random mutations in identical or nearly identical templates to create distinguishable patterns that are inherited during subsequent copying. We simulate the applications of this process under assumptions of error-free sequencing and perfect mapping, using cytosine deamination as a model for mutation. The simulations demonstrate that within readily achievable conditions of nucleotide conversion and sequence coverage, we can accurately count the number of otherwise identical molecules as well as connect variants separated by long spans of identical sequence. We discuss many potential applications, such as transcript profiling, isoform assembly, haplotype phasing, and de novo genome assembly. PMID:25313059
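    The counting idea can be sketched with a toy deamination model: each starting molecule picks up a random C-to-T pattern, copies inherit that pattern, and the number of distinct patterns among the reads recovers the template count. The conversion probability, template sequence, and molecule numbers are arbitrary assumptions, and sequencing and mapping are taken to be error-free as in the simulations above.

```python
import random

random.seed(1)
TEMPLATE = "ACGTCCGATCGGCATCGATC" * 5   # identical starting molecules
P_CONVERT = 0.5                         # per-C conversion probability (assumed)

def mutate(seq, p):
    # cytosine deamination model: each C independently reads out as T
    return "".join("T" if b == "C" and random.random() < p else b
                   for b in seq)

def count_molecules(reads):
    # copies inherit their template's mutation pattern, so the number of
    # distinct patterns estimates the number of starting molecules
    return len(set(reads))

n_molecules = 30
patterns = [mutate(TEMPLATE, P_CONVERT) for _ in range(n_molecules)]
reads = [p for p in patterns for _ in range(10)]   # PCR-like copying
estimate = count_molecules(reads)
```

    With 35 cytosines per template and p = 0.5, pattern collisions between distinct molecules are vanishingly unlikely, so the estimate matches the true count.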

  14. Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes

    NASA Astrophysics Data System (ADS)

    Farzan Sabahi, Mohammad; Dehghanfard, Ali

    2014-12-01

    The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of information by unintended listeners. In fact, low probability of detection and low probability of intercept are two important parameters for increasing the performance of the system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. The use of a one-dimensional Bernoulli chaotic sequence as a spreading code has been proposed previously in the literature. The main feature of this sequence is its negative auto-correlation at a lag of 1, which, with proper design, leads to increased efficiency of communication systems based on these codes. On the other hand, employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of multi-user synchronous and asynchronous DS-CDMA systems is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate improved performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Like the one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. Besides, construction of complex sequences with lower average cross-correlation is possible with the proposed method.
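    A generic one-dimensional Bernoulli-map chip generator, with the lag-1 autocorrelation measured empirically. Taking β slightly below 2 is an implementation assumption (the exact dyadic map collapses to 0 in floating point after about 53 doublings), and x0 is arbitrary; this illustrates only the one-dimensional construction mentioned above, not the paper's two-dimensional complex sequences or their specific autocorrelation design.

```python
def bernoulli_map(x0, n, beta=1.99):
    # generalized Bernoulli shift x_{k+1} = beta * x_k mod 1,
    # chaotic for beta > 1
    xs, x = [], x0
    for _ in range(n):
        x = (beta * x) % 1.0
        xs.append(x)
    return xs

def spreading_code(x0, n):
    # binarize the chaotic orbit into +/-1 chips
    return [1 if x >= 0.5 else -1 for x in bernoulli_map(x0, n)]

def autocorr(chips, lag):
    n = len(chips) - lag
    return sum(chips[i] * chips[i + lag] for i in range(n)) / n

code = spreading_code(0.1415926535, 1024)
r1 = autocorr(code, 1)      # empirical lag-1 auto-correlation
```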

  15. Graphene Nanopores for Protein Sequencing.

    PubMed

    Wilson, James; Sloman, Leila; He, Zhiren; Aksimentiev, Aleksei

    2016-07-19

    An inexpensive, reliable method for protein sequencing is essential to unraveling the biological mechanisms governing cellular behavior and disease. Current protein sequencing methods suffer from limitations associated with the size of proteins that can be sequenced, the time, and the cost of the sequencing procedures. Here, we report the results of all-atom molecular dynamics simulations that investigated the feasibility of using graphene nanopores for protein sequencing. We focus our study on the biologically significant phenylalanine-glycine repeat peptides (FG-nups)-parts of the nuclear pore transport machinery. Surprisingly, we found FG-nups to behave similarly to single stranded DNA: the peptides adhere to graphene and exhibit step-wise translocation when subject to a transmembrane bias or a hydrostatic pressure gradient. Reducing the peptide's charge density or increasing the peptide's hydrophobicity was found to decrease the translocation speed. Yet, unidirectional and stepwise translocation driven by a transmembrane bias was observed even when the ratio of charged to hydrophobic amino acids was as low as 1:8. The nanopore transport of the peptides was found to produce stepwise modulations of the nanopore ionic current correlated with the type of amino acids present in the nanopore, suggesting that protein sequencing by measuring ionic current blockades may be possible.

  16. Multiple Teaching Approaches, Teaching Sequence and Concept Retention in High School Physics Education

    ERIC Educational Resources Information Center

    Fogarty, Ian; Geelan, David

    2013-01-01

    Students in 4 Canadian high school physics classes completed instructional sequences in two key physics topics related to motion--Straight Line Motion and Newton's First Law. Different sequences of laboratory investigation, teacher explanation (lecture) and the use of computer-based scientific visualizations (animations and simulations) were…

  17. Optimization of a double inversion recovery sequence for noninvasive synovium imaging of joint effusion in the knee.

    PubMed

    Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam

    2011-05-01

    We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
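    Under ideal inversions and neglecting TR recovery, the DIR longitudinal magnetization at readout is Mz/M0 = 1 - 2·exp(-TI2/T1) + 2·exp(-TI1/T1), with TI1 and TI2 measured from the first and second inversion to the readout. The grid search below finds a (TI1, TI2) pair nulling two assumed T1 values; the fat and free-fluid T1 figures are illustrative 3 T estimates, not the paper's tissue parameters, and the simplified signal model is an assumption.

```python
import math

def dir_signal(t1, ti1, ti2):
    # ideal DIR longitudinal magnetization at readout, neglecting TR:
    # Mz/M0 = 1 - 2 exp(-TI2/T1) + 2 exp(-TI1/T1)
    return 1.0 - 2.0 * math.exp(-ti2 / t1) + 2.0 * math.exp(-ti1 / t1)

def solve_tis(t1_a, t1_b):
    # coarse grid search (ms) for a (TI1, TI2) pair nulling both tissues
    best, best_err = None, float("inf")
    for ti1 in range(500, 4001, 10):
        for ti2 in range(50, 600, 2):
            if ti2 >= ti1:
                continue
            err = (abs(dir_signal(t1_a, ti1, ti2))
                   + abs(dir_signal(t1_b, ti1, ti2)))
            if err < best_err:
                best, best_err = (ti1, ti2), err
    return best, best_err

# assumed 3 T relaxation times (ms): fat ~ 365, free fluid ~ 3300
tis, err = solve_tis(365.0, 3300.0)
```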

  18. Sanger Confirmation Is Required to Achieve Optimal Sensitivity and Specificity in Next-Generation Sequencing Panel Testing.

    PubMed

    Mu, Wenbo; Lu, Hsiao-Mei; Chen, Jefferey; Li, Shuwei; Elliott, Aaron M

    2016-11-01

    Next-generation sequencing (NGS) has rapidly replaced Sanger sequencing as the method of choice for diagnostic gene-panel testing. For hereditary-cancer testing, the technical sensitivity and specificity of the assay are paramount as clinicians use results to make important clinical management and treatment decisions. There is significant debate within the diagnostics community regarding the necessity of confirming NGS variant calls by Sanger sequencing, considering that numerous laboratories report having 100% specificity from the NGS data alone. Here we report our results from 20,000 hereditary-cancer NGS panels spanning 47 genes, in which all 7845 nonpolymorphic variants were Sanger-sequenced. Of these, 98.7% were concordant between NGS and Sanger sequencing and 1.3% were identified as NGS false-positives, located mainly in complex genomic regions (A/T-rich regions, G/C-rich regions, homopolymer stretches, and pseudogene regions). Simulating a false-positive rate of zero by adjusting the variant-calling quality-score thresholds decreased the sensitivity of the assay from 100% to 97.8%, resulting in the missed detection of 176 Sanger-confirmed variants, the majority in complex genomic regions (n = 114) and mosaic mutations (n = 7). The data illustrate the importance of setting quality thresholds for panel testing only after thousands of samples have been processed and the necessity of Sanger confirmation of NGS variants to maintain the highest possible sensitivity. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  19. Efficient dynamical correction of the transition state theory rate estimate for a flat energy barrier.

    PubMed

    Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes

    2016-09-07

    The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
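
    The interface-chaining idea can be illustrated on a toy one-dimensional diffuser over a flat-topped barrier (this is not the paper's polymer model, and all parameter values are invented): the correction-like factor is accumulated as a product of short, cheap conditional hop probabilities.

```python
import math
import random

# Toy 1D sketch of the hyperplane-sequence idea: starting at the barrier top
# x = 0, estimate the probability that an overdamped Langevin walker reaches
# each successive interface before returning to the reactant boundary, and
# chain the conditional probabilities (all parameters assumed for illustration).

def step(x, dt=1e-3, temp=0.5, slope=0.02):
    # flat-topped barrier: a tiny downhill force away from the top on each side
    force = slope * (1.0 if x > 0 else -1.0)
    return x + force * dt + math.sqrt(2 * temp * dt) * random.gauss(0, 1)

def hop_probability(start, target, reactant=-1.0, trials=400):
    """Fraction of trajectories from `start` hitting `target` before `reactant`."""
    hits = 0
    for _ in range(trials):
        x = start
        while reactant < x < target:
            x = step(x)
        hits += x >= target
    return hits / trials

random.seed(1)
interfaces = [0.0, 0.25, 0.5, 0.75, 1.0]
p = 1.0
for a, b in zip(interfaces, interfaces[1:]):
    p *= hop_probability(a, b)
print(f"chained transmission estimate: {p:.3f}")
```

    For nearly free diffusion the factors telescope to the direct commitment probability (here 1/2), so the chained estimate should land near 0.5; the computational advantage is that each short segment is far cheaper to sample than a full barrier-to-product trajectory.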

  20. Computational analysis of stochastic heterogeneity in PCR amplification efficiency revealed by single molecule barcoding

    PubMed Central

    Best, Katharine; Oakes, Theres; Heather, James M.; Shawe-Taylor, John; Chain, Benny

    2015-01-01

    The polymerase chain reaction (PCR) is one of the most widely used techniques in molecular biology. In combination with High Throughput Sequencing (HTS), PCR is widely used to quantify transcript abundance for RNA-seq, and in the context of analysis of T and B cell receptor repertoires. In this study, we combine DNA barcoding with HTS to quantify PCR output from individual target molecules. We develop computational tools that simulate both the PCR branching process itself, and the subsequent subsampling which typically occurs during HTS. We explore the influence of different types of heterogeneity on sequencing output, and compare them to experimental results where the efficiency of amplification is measured by barcodes uniquely identifying each molecule of starting template. Our results demonstrate that the PCR process introduces substantial amplification heterogeneity, independent of primer sequence and bulk experimental conditions. This heterogeneity can be attributed both to inherited differences between template DNA molecules and to the inherent stochasticity of the PCR process. The results demonstrate that PCR heterogeneity arises even when reaction and substrate conditions are kept as constant as possible, and therefore single molecule barcoding is essential in order to derive reproducible quantitative results from any protocol combining PCR with HTS. PMID:26459131
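
    The two-stage model described above (a branching amplification process followed by read subsampling) can be sketched as follows; every parameter value (mean efficiency 0.75, its spread, cycle and read counts) is assumed for illustration, not taken from the paper:

```python
import math
import random
from collections import Counter

# Sketch of a two-stage PCR model: a Galton-Watson branching process with
# per-molecule inherited efficiencies, followed by weighted subsampling of the
# amplified pool to mimic HTS reads. All parameter values are assumed.

random.seed(0)

def binom(n, p):
    """Binomial draw: exact for small n, normal approximation for large n."""
    if n < 30:
        return sum(random.random() < p for _ in range(n))
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    return min(n, max(0, round(random.gauss(mu, sd))))

def pcr_then_sequence(n_molecules=300, cycles=15, reads=30_000, eff_sd=0.0):
    # inherited per-template amplification efficiency
    effs = [min(0.99, max(0.01, random.gauss(0.75, eff_sd)))
            for _ in range(n_molecules)]
    counts = [1] * n_molecules
    for _ in range(cycles):
        # each existing copy duplicates with its template's inherited efficiency
        counts = [c + binom(c, e) for c, e in zip(counts, effs)]
    # HTS subsampling: reads drawn proportionally to post-PCR copy number
    tally = Counter(random.choices(range(n_molecules), weights=counts, k=reads))
    return [tally.get(i, 0) for i in range(n_molecules)]

def cv(xs):
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs)) / mean

uniform_cv = cv(pcr_then_sequence(eff_sd=0.0))   # identical efficiencies
varied_cv = cv(pcr_then_sequence(eff_sd=0.10))   # inherited heterogeneity
print(f"read-count CV: uniform={uniform_cv:.2f}, inherited variation={varied_cv:.2f}")
```

    Even with identical efficiencies, early-cycle stochasticity leaves a nonzero spread in reads per barcode; adding a modest inherited efficiency spread inflates the dispersion several-fold, mirroring the paper's qualitative finding.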

  1. A Six Nuclear Gene Phylogeny of Citrus (Rutaceae) Taking into Account Hybridization and Lineage Sorting

    PubMed Central

    Keremane, Manjunath L.; Lee, Richard F.; Maureira-Butler, Ivan J.; Roose, Mikeal L.

    2013-01-01

    Background Genus Citrus (Rutaceae) comprises many important cultivated species that generally hybridize easily. Phylogenetic study of a group showing extensive hybridization is challenging. Since the genus Citrus has diverged recently (4–12 Ma), incomplete lineage sorting of ancestral polymorphisms is also likely to cause discrepancies among genes in phylogenetic inferences. Incongruence of gene trees is observed and it is essential to unravel the processes that cause inconsistencies in order to understand the phylogenetic relationships among the species. Methodology and Principal Findings (1) We generated phylogenetic trees using haplotype sequences of six low copy nuclear genes. (2) Published simple sequence repeat data were re-analyzed to study population structure and the results were compared with the phylogenetic trees constructed using sequence data and coalescence simulations. (3) To distinguish between hybridization and incomplete lineage sorting, we developed and utilized a coalescence simulation approach. In other studies, species trees have been inferred despite the possibility of hybridization having occurred and used to generate null distributions of the effect of lineage sorting alone (by coalescent simulation). Since this is problematic, we instead generate these distributions directly from observed gene trees. Of the six trees generated, we used the most resolved three to detect hybrids. We found that 11 of 33 samples appear to be affected by historical hybridization. Analysis of the remaining three genes supported the conclusions from the hybrid detection test. Conclusions We have identified or confirmed probable hybrid origins for several Citrus cultivars using three different approaches–gene phylogenies, population structure analysis and coalescence simulation. Hybridization and incomplete lineage sorting were identified primarily based on differences among gene phylogenies with reference to null expectations via coalescence simulations. 
We conclude that identifying hybridization as a frequent cause of incongruence among gene trees is critical to correctly infer the phylogeny among species of Citrus. PMID:23874615

  2. Improving the accuracy of protein stability predictions with multistate design using a variety of backbone ensembles.

    PubMed

    Davey, James A; Chica, Roberto A

    2014-05-01

    Multistate computational protein design (MSD) with backbone ensembles approximating conformational flexibility can predict higher quality sequences than single-state design with a single fixed backbone. However, it is currently unclear what characteristics of backbone ensembles are required for the accurate prediction of protein sequence stability. In this study, we aimed to improve the accuracy of protein stability predictions made with MSD by using a variety of backbone ensembles to recapitulate the experimentally measured stability of 85 Streptococcal protein G domain β1 sequences. Ensembles tested here include an NMR ensemble as well as those generated by molecular dynamics (MD) simulations, by Backrub motions, and by PertMin, a new method that we developed involving the perturbation of atomic coordinates followed by energy minimization. MSD with the PertMin ensembles resulted in the most accurate predictions by providing the highest number of stable sequences in the top 25, and by correctly binning sequences as stable or unstable with the highest success rate (≈90%) and the lowest number of false positives. The performance of PertMin ensembles is due to the fact that their members closely resemble the input crystal structure and have low potential energy. Conversely, the NMR ensemble as well as those generated by MD simulations at 500 or 1000 K reduced prediction accuracy due to their low structural similarity to the crystal structure. The ensembles tested herein thus represent on- or off-target models of the native protein fold and could be used in future studies to design for desired properties other than stability. Copyright © 2013 Wiley Periodicals, Inc.

  3. Mapping of Cardiac Electrical Activation with Electromechanical Wave Imaging: An in silico-in vivo Reciprocity Study

    PubMed Central

    Provost, Jean; Gurev, Viatcheslav; Trayanova, Natalia; Konofagou, Elisa E.

    2011-01-01

    Background Electromechanical Wave Imaging (EWI) is an entirely non-invasive, ultrasound-based imaging method capable of mapping the electromechanical activation sequence of the ventricles in vivo. Given the broad accessibility of ultrasound scanners in the clinic, the application of EWI could constitute a flexible surrogate for the 3D electrical activation. Objective The purpose of this report is to reproduce the electromechanical wave (EW) using an anatomically-realistic electromechanical model, and establish the capability of EWI to map the electrical activation sequence in vivo when pacing from different locations. Methods EWI was performed in one canine during pacing from three different sites. A high-resolution dynamic model of coupled cardiac electromechanics of the canine heart was used to predict the experimentally recorded electromechanical wave. The simulated 3D electrical activation sequence was then compared with the experimental EW. Results The electrical activation sequence and the EW were highly correlated for all pacing sites. The relationship between the electrical activation and the EW onset was found to be linear with a slope of 1.01 to 1.17 for different pacing schemes and imaging angles. Conclusions The accurate reproduction of the EW in simulations indicates that the model framework is capable of accurately representing the cardiac electromechanics and thus testing new hypotheses. The one-to-one correspondence between the electrical activation sequence and the EW indicates that EWI could be used to map the cardiac electrical activity. This opens the door for further exploration of the technique in assisting in the early detection, diagnosis and treatment monitoring of rhythm dysfunction. PMID:21185403

  4. Investigation of Hydrogen Embrittlement Susceptibility of X80 Weld Joints by Thermal Simulation

    NASA Astrophysics Data System (ADS)

    Peng, Huangtao; An, Teng; Zheng, Shuqi; Luo, Bingwei; Wang, Siyu; Zhang, Shuai

    2018-05-01

    The objective of this study was to investigate the hydrogen embrittlement (HE) susceptibility of X80 weld joints and the mechanisms that influence it. Slow strain rate testing (SSRT) under in situ H-charging, combined with microstructure and fracture analysis, was performed on the base metal (BM), weld metal (WM), thermally simulated fine-grained heat-affected zone (FGHAZ) and coarse-grained heat-affected zone (CGHAZ). Results showed that the WM and simulated HAZ had a greater degree of high local strain distribution than the BM; compared to the CGHAZ, the FGHAZ had lower microhardness and more uniformly distributed stress. SSRT results showed that the weld joint was highly sensitive to HE; the HE index decreased in the following sequence: FGHAZ, WM, CGHAZ and BM. The effect of microstructure on HE was mainly reflected in the local stress distribution and microhardness.

  5. A Simulation Testbed for Airborne Merging and Spacing

    NASA Technical Reports Server (NTRS)

    Santos, Michel; Manikonda, Vikram; Feinberg, Art; Lohr, Gary

    2008-01-01

    The key innovation in this effort is the development of a simulation testbed for airborne merging and spacing (AM&S). We focus on concepts related to airports with Super Dense Operations where new airport runway configurations (e.g. parallel runways), sequencing, merging, and spacing are some of the concepts considered. We focus on modeling and simulating a complementary airborne and ground system for AM&S to increase efficiency and capacity of these high density terminal areas. From a ground systems perspective, a scheduling decision support tool generates arrival sequences and spacing requirements that are fed to the AM&S system operating on the flight deck. We enhanced NASA's Airspace Concept Evaluation Systems (ACES) software to model and simulate AM&S concepts and algorithms.

  6. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    PubMed

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  7. SU-C-17A-02: Sirius MRI Markers for Prostate Post-Implant Assessment: MR Protocol Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, T; Wang, J; Kudchadker, R

    Purpose: Currently, CT is used to visualize prostate brachytherapy sources, at the expense of accurate structure contouring. MRI is superior to CT for anatomical delineation, but the sources appear as voids on MRI images. Previously we have developed Sirius MRI markers (C4 Imaging) to replace spacers to assist source localization on MRI images. Here we develop an MRI pulse sequence protocol that enhances the signal of these markers to enable MRI-only post-implant prostate dosimetric analysis. Methods: To simulate a clinical scenario, a CIRS multi-modality prostate phantom was implanted with 66 markers and 86 sources. The implanted phantom was imaged on both 1.5T and 3.0T GE scanners under various conditions: different pulse sequences (2D fast spin echo [FSE], 3D balanced steady-state free precession [bSSFP] and 3D fast spoiled gradient echo [FSPGR]), as well as varying amounts of padding to simulate various patient sizes and the associated signal fall-off from the surface coil elements. Standard FSE sequences from the current clinical protocols were also evaluated. Marker visibility, marker size, intra-marker distance, total scan time and artifacts were evaluated for various combinations of echo time, repetition time, flip angle, number of excitations, bandwidth, slice thickness and spacing, field-of-view, frequency/phase encoding steps and frequency direction. Results: We have developed a 3D FSPGR pulse sequence that enhances marker signal and ensures the integrity of the marker shape while maintaining reasonable scan time. For patients contraindicated for 3.0T, we have also developed a similar sequence for 1.5T scanners. Signal fall-off with distance from prostate to coil can be compensated mainly by decreasing bandwidth. The markers are not visible using standard FSE sequences. FSPGR sequences are more robust for consistent marker visualization as compared to bSSFP sequences. 
Conclusion: The developed MRI pulse sequence protocol for Sirius MRI markers assists source localization to enable MRI-only post-implant prostate dosimetric analysis. S.J. Frank is a co-founder of C4 Imaging (manufactures the MRI markers).

  8. [Effect of simulated heavy metal leaching solution of electroplating sludge on the bioactivity of Acidithiobacillus ferrooxidans].

    PubMed

    Xie, Xin-Yuan; Sun, Pei-De; Lou, Ju-Qing; Guo, Mao-Xin; Ma, Wang-Gang

    2013-01-01

    An Acidithiobacillus ferrooxidans strain WZ-1 was isolated from the tannery sludge in Wenzhou, Zhejiang Province in China. The cells of strain WZ-1 are Gram-negative and rod-shaped, and its 16S rDNA sequence is closely related to that of Acidithiobacillus ferrooxidans ATCC23270 (99% similarity). These results reveal that WZ-1 is a strain of Acidithiobacillus ferrooxidans. The effects of Ni2+, Cr3+, Cu2+, Zn2+ and 5 kinds of simulated leaching solutions of electroplating sludge on the bioactivity of Fe2+ oxidation and apparent respiratory rate of WZ-1 were investigated. The results showed that Ni2+ and Cr3+ did not have any influence on the bioactivity of WZ-1 at concentrations of 5.0 g x L(-1) and 0.1 g x L(-1), respectively. WZ-1 showed tolerance to high levels of Ni2+, Zn2+ (about 30.0 g x L(-1)), but it had lower tolerance to Cr3+ and Cu2+ (0.1 g x L(-1) Cr3+ and 2.5 g x L(-1) Cu2+). Different kinds of simulated leaching solution of electroplating sludge had significant differences in terms of their effects on the bioactivity of WZ-1 with a sequence of Cu/Ni/Cr/Zn > Cu/Ni/Zn > Cu/Cr/Zn > Cu/Ni/Cr > Ni/Cr/Zn.

  9. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  10. A Novel Low Energy Electron Microscope for DNA Sequencing and Surface Analysis

    PubMed Central

    Mankos, M.; Shadman, K.; Persson, H.H.J.; N’Diaye, A.T.; Schmid, A.K.; Davis, R.W.

    2014-01-01

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of individual DNA bases in a sequence. PMID:24524867

  11. Peptide-Directed PdAu Nanoscale Surface Segregation: Toward Controlled Bimetallic Architecture for Catalytic Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedford, Nicholas M.; Showalter, Allison R.; Woehl, Taylor J.

    Bimetallic nanoparticles are of immense scientific and technological interest given the synergistic properties observed when mixing two different metallic species at the nanoscale. This is particularly prevalent in catalysis, where bimetallic nanoparticles often exhibit improved catalytic activity and durability over their monometallic counterparts. Yet despite intense research efforts, little is understood regarding how to optimize bimetallic surface composition and structure synthetically using rational design principles. Recently, it has been demonstrated that peptide-enabled routes for nanoparticle synthesis result in materials with sequence-dependent catalytic properties, providing an opportunity for rational design through sequence manipulation. In this study, bimetallic PdAu nanoparticles are synthesized with a small set of peptides containing known Pd and Au binding motifs. The resulting nanoparticles were extensively characterized using high-resolution scanning transmission electron microscopy, X-ray absorption spectroscopy and high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Structural information obtained from synchrotron radiation methods were then used to generate model nanoparticle configurations using reverse Monte Carlo simulations, which illustrate sequence-dependence in both surface structure and surface composition. Replica exchange solute tempering molecular dynamic simulations were also used to predict the modes of peptide binding on monometallic surfaces, indicating that different sequences bind to the metal interfaces via different mechanisms. As a testbed reaction, electrocatalytic methanol oxidation experiments were performed, wherein differences in catalytic activity are clearly observed in materials with identical bimetallic composition. 
Taken together, this study indicates that peptides could be used to arrive at bimetallic surfaces with enhanced catalytic properties, which could be leveraged for rational bimetallic nanoparticle design using peptide-enabled approaches.

  12. Treatment carryover impacts on effectiveness of intraocular pressure lowering agents, estimated by a discrete event simulation model.

    PubMed

    Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G

    2008-01-01

    To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT). A discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the specific prostaglandin-timolol sequence LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective at avoiding starting a third line treatment than the L-LT sequence. T-TT treated patients developed less glaucoma progression.
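
    A minimal sketch of such a discrete event simulation, assuming hypothetical Weibull time-to-failure distributions in place of the trial-fitted log-log linear functions:

```python
import random

# Minimal DES sketch of the two treatment sequences. All failure-time
# parameters below are invented for illustration; the actual model fitted
# time-to-failure curves from two randomized clinical trials.

random.seed(42)
HORIZON = 60  # months

def time_to_failure(scale, shape=1.2):
    """Months until IOP >= 18 mmHg at two visits (hypothetical Weibull model)."""
    return random.weibullvariate(scale, shape)

def third_line_probability(first_scale, second_scale, patients=10_000):
    """Fraction of patients failing both lines within the 60-month horizon."""
    failures = 0
    for _ in range(patients):
        t = time_to_failure(first_scale)          # first-line prostaglandin
        if t < HORIZON:
            t += time_to_failure(second_scale)    # switch to fixed combination
            if t < HORIZON:
                failures += 1
    return failures / patients

# hypothetical scale parameters encoding the reported ordering
# (travoprost arms assumed to fail later than latanoprost arms)
p_llt = third_line_probability(40, 45)
p_ttt = third_line_probability(50, 55)
print(f"third-line probability: L-LT {p_llt:.1%}, T-TT {p_ttt:.1%}")
```

    With per-patient event times sampled and chained in this way, outcome probabilities at any horizon fall out as simple counts over the simulated cohort, which is the essential mechanic of the DES approach described above.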

  13. Peptide-Directed PdAu Nanoscale Surface Segregation: Toward Controlled Bimetallic Architecture for Catalytic Materials

    DOE PAGES

    Bedford, Nicholas M.; Showalter, Allison R.; Woehl, Taylor J.; ...

    2016-09-01

    Bimetallic nanoparticles are of immense scientific and technological interest given the synergistic properties observed when mixing two different metallic species at the nanoscale. This is particularly prevalent in catalysis, where bimetallic nanoparticles often exhibit improved catalytic activity and durability over their monometallic counterparts. Yet despite intense research efforts, little is understood regarding how to optimize bimetallic surface composition and structure synthetically using rational design principles. Recently, it has been demonstrated that peptide-enabled routes for nanoparticle synthesis result in materials with sequence-dependent catalytic properties, providing an opportunity for rational design through sequence manipulation. In this study, bimetallic PdAu nanoparticles are synthesized with a small set of peptides containing known Pd and Au binding motifs. The resulting nanoparticles were extensively characterized using high-resolution scanning transmission electron microscopy, X-ray absorption spectroscopy and high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Structural information obtained from synchrotron radiation methods were then used to generate model nanoparticle configurations using reverse Monte Carlo simulations, which illustrate sequence-dependence in both surface structure and surface composition. Replica exchange solute tempering molecular dynamic simulations were also used to predict the modes of peptide binding on monometallic surfaces, indicating that different sequences bind to the metal interfaces via different mechanisms. As a testbed reaction, electrocatalytic methanol oxidation experiments were performed, wherein differences in catalytic activity are clearly observed in materials with identical bimetallic composition. 
Taken together, this study indicates that peptides could be used to arrive at bimetallic surfaces with enhanced catalytic properties, which could be leveraged for rational bimetallic nanoparticle design using peptide-enabled approaches.

  14. A novel low energy electron microscope for DNA sequencing and surface analysis.

    PubMed

    Mankos, M; Shadman, K; Persson, H H J; N'Diaye, A T; Schmid, A K; Davis, R W

    2014-10-01

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of individual DNA bases in a sequence. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. A novel low energy electron microscope for DNA sequencing and surface analysis

    DOE PAGES

    Mankos, M.; Shadman, K.; Persson, H. H. J.; ...

    2014-01-31

    Monochromatic, aberration-corrected, dual-beam low energy electron microscopy (MAD-LEEM) is a novel technique that is directed towards imaging nanostructures and surfaces with sub-nanometer resolution. The technique combines a monochromator, a mirror aberration corrector, an energy filter, and dual beam illumination in a single instrument. The monochromator reduces the energy spread of the illuminating electron beam, which significantly improves spectroscopic and spatial resolution. Simulation results predict that the novel aberration corrector design will eliminate the second rank chromatic and third and fifth order spherical aberrations, thereby improving the resolution into the sub-nanometer regime at landing energies as low as one hundred electron-Volts. The energy filter produces a beam that can extract detailed information about the chemical composition and local electronic states of non-periodic objects such as nanoparticles, interfaces, defects, and macromolecules. The dual flood illumination eliminates charging effects that are generated when a conventional LEEM is used to image insulating specimens. A potential application for MAD-LEEM is in DNA sequencing, which requires high resolution to distinguish the individual bases and high speed to reduce the cost. The MAD-LEEM approach images the DNA with low electron impact energies, which provides nucleobase contrast mechanisms without organometallic labels. Furthermore, the micron-size field of view when combined with imaging on the fly provides long read lengths, thereby reducing the demand on assembling the sequence. Finally, experimental results from bulk specimens with immobilized single-base oligonucleotides demonstrate that base specific contrast is available with reflected, photo-emitted, and Auger electrons. 
Image contrast simulations of model rectangular features mimicking the individual nucleotides in a DNA strand have been developed to translate measurements of contrast on bulk DNA to the detectability of individual DNA bases in a sequence.

  16. Modeling and simulation of magnetic resonance imaging based on intermolecular multiple quantum coherences

    NASA Astrophysics Data System (ADS)

    Cai, Congbo; Dong, Jiyang; Cai, Shuhui; Cheng, En; Chen, Zhong

    2006-11-01

    Intermolecular multiple quantum coherences (iMQCs) have many potential applications, since they can provide interaction information between different molecules within the range of the dipolar correlation distance and can provide new contrast in magnetic resonance imaging (MRI). Because of the non-localized nature of the dipolar field and the non-linear character of the Bloch equations incorporating the dipolar field term, the evolution behavior of iMQCs is difficult to derive rigorously in many cases. In such cases, simulation studies are very important: simulation results can not only guide the optimization of experimental conditions, but also help analyze unexpected experimental results. Based on our product operator matrix and the k-space method for dipolar field calculation, MRI simulation software was constructed, running on the Windows operating system. The non-linear Bloch equations are integrated with a fifth-order Cash-Karp Runge-Kutta formalism. Computational time can be efficiently reduced by separating the effects of chemical shifts and the strong gradient field. Using this software, simulations of different kinds of complex MRI sequences can be done conveniently and quickly on general personal computers. Several examples are given and the results discussed.
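The integration scheme named in the abstract can be sketched in a few lines. The following is a minimal single-spin illustration, assuming a plain Bloch equation without the dipolar-field coupling that the authors' software actually handles; the field, relaxation times, and step size are arbitrary illustrative values:

```python
import numpy as np

# Cash-Karp tableau (stage coefficients and 5th-order weights).
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [3/10, -9/10, 6/5],
    [-11/54, 5/2, -70/27, 35/27],
    [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096],
]
C5 = [37/378, 0, 250/621, 125/594, 0, 512/1771]

def bloch_rhs(M, B, T1, T2, M0=1.0, gamma=1.0):
    """dM/dt for a single spin: precession about B plus T1/T2 relaxation."""
    dM = gamma * np.cross(M, B)
    dM[0] -= M[0] / T2
    dM[1] -= M[1] / T2
    dM[2] += (M0 - M[2]) / T1
    return dM

def cash_karp_step(M, dt, B, T1, T2):
    """One fifth-order Cash-Karp Runge-Kutta step."""
    k = []
    for row in A:
        Mi = M + dt * sum(a * ki for a, ki in zip(row, k))
        k.append(bloch_rhs(Mi, B, T1, T2))
    return M + dt * sum(c * ki for c, ki in zip(C5, k))

def integrate(M, t_end, dt, B, T1, T2):
    """Fixed-step driver (the adaptive error control that the embedded
    Cash-Karp pair normally provides is omitted for brevity)."""
    t = 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        M = cash_karp_step(M, h, B, T1, T2)
        t += h
    return M
```

For B = 0 this reproduces pure relaxation, which gives a convenient analytic check of the integrator.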

  17. Connecting Common Genetic Polymorphisms to Protein Function: A Modular Project Sequence for Lecture or Lab

    ERIC Educational Resources Information Center

    Berndsen, Christopher E.; Young, Byron H.; McCormick, Quinlin J.; Enke, Raymond A.

    2016-01-01

    Single nucleotide polymorphisms (SNPs) in DNA can result in phenotypes where the biochemical basis may not be clear due to the lack of protein structures. With the growing number of modeling and simulation software available on the internet, students can now participate in determining how small changes in genetic information impact cellular…

  18. Theoretical Determination of the Lift of a Simulated Ejector Wing.

    DTIC Science & Technology

    1982-12-01

    at n different values of X. This is easily done and the values which result are labeled Nupper and Nlower. Equations (25) and (26) then become: Fx -i...Foreign Technology Division. He began studies at AFIT as a part-time student in April 1979 in the Aero/Astro Department; pursuing the sequences of Air

  19. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudo-random (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst-case situations were simulated, and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) encoder/decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.

  20. Indirect MRI of ¹⁷O-labeled water using steady-state sequences: Signal simulation and preclinical experiment.

    PubMed

    Kudo, Kohsuke; Harada, Taisuke; Kameda, Hiroyuki; Uwano, Ikuko; Yamashita, Fumio; Higuchi, Satomi; Yoshioka, Kunihiro; Sasaki, Makoto

    2018-05-01

    Few studies have been reported for T2-weighted indirect ¹⁷O imaging. To evaluate the feasibility of steady-state sequences for indirect ¹⁷O brain imaging, signal simulation, phantom measurements, and prospective animal experiments were performed in accordance with the institutional guidelines for animal experiments. Signal simulations of balanced steady-state free precession (bSSFP) were performed for ¹⁷O concentrations ranging from 0.037-1.600%. Phantom measurements with ¹⁷O water concentrations ranging from 0.037-1.566% were also conducted. Six healthy beagle dogs were scanned with intravenous administration of 20% ¹⁷O-labeled water (1 mL/kg). Dynamic 3D-bSSFP scans were performed at 3T MRI. ¹⁷O-labeled water was injected 60 seconds after the scan start, and the total scan duration was 5 minutes. Based on the results of the signal simulation and phantom measurement, signal changes in the beagle dogs were measured and converted into ¹⁷O concentrations. The ¹⁷O concentrations were averaged every 15 seconds and compared to the baseline (30-45 sec) with Dunnett's multiple comparison tests. Signal simulation revealed that the relationship between ¹⁷O concentration and the natural logarithm of the relative signal was linear. The intraclass correlation coefficient between relative signals in the phantom measurement and the signal simulations was 0.974. In the animal experiments, significant increases in ¹⁷O concentration (P < 0.05) were observed 60 seconds after the injection of ¹⁷O. At the end of scanning, mean ¹⁷O concentrations of 0.084 ± 0.026%, 0.117 ± 0.038%, 0.082 ± 0.037%, and 0.049 ± 0.004% were noted for the cerebral cortex, cerebellar cortex, cerebral white matter, and ventricle, respectively. Dynamic steady-state sequences were feasible for indirect ¹⁷O imaging, and absolute quantification was possible. This method can be applied to the measurement of permeability and blood flow in the brain, and to kinetic analysis of cerebrospinal fluid.
Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:1373-1379. © 2017 International Society for Magnetic Resonance in Medicine.
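The reported log-linear relation between ¹⁷O concentration and relative signal suggests a simple two-step quantification: fit a calibration line in log-signal space, then invert it for measured signals. A sketch, with synthetic calibration values standing in for the paper's phantom data:

```python
import numpy as np

def fit_calibration(concentrations, rel_signals):
    """Least-squares fit of ln(S_rel) = a*C + b, following the reported
    linear relation (a and b would come from phantom calibration)."""
    a, b = np.polyfit(np.asarray(concentrations), np.log(rel_signals), 1)
    return a, b

def signal_to_concentration(rel_signal, a, b):
    """Invert the calibration line: C = (ln(S_rel) - b) / a."""
    return (np.log(rel_signal) - b) / a
```

With noiseless synthetic data the fit recovers the generating line exactly, which is a useful sanity check before applying the inversion to measured voxels.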

  1. Navigating the tip of the genomic iceberg: Next-generation sequencing for plant systematics.

    PubMed

    Straub, Shannon C K; Parks, Matthew; Weitemier, Kevin; Fishbein, Mark; Cronn, Richard C; Liston, Aaron

    2012-02-01

    Just as Sanger sequencing did more than 20 years ago, next-generation sequencing (NGS) is poised to revolutionize plant systematics. By combining multiplexing approaches with NGS throughput, systematists may no longer need to choose between more taxa or more characters. Here we describe a genome skimming (shallow sequencing) approach for plant systematics. Through simulations, we evaluated optimal sequencing depth and performance of single-end and paired-end short read sequences for assembly of nuclear ribosomal DNA (rDNA) and plastomes and addressed the effect of divergence on reference-guided plastome assembly. We also used simulations to identify potential phylogenetic markers from low-copy nuclear loci at different sequencing depths. We demonstrated the utility of genome skimming through phylogenetic analysis of the Sonoran Desert clade (SDC) of Asclepias (Apocynaceae). Paired-end reads performed better than single-end reads. Minimum sequencing depths for high quality rDNA and plastome assemblies were 40× and 30×, respectively. Divergence from the reference significantly affected plastome assembly, but relatively similar references are available for most seed plants. Deeper rDNA sequencing is necessary to characterize intragenomic polymorphism. The low-copy fraction of the nuclear genome was readily surveyed, even at low sequencing depths. Nearly 160000 bp of sequence from three organelles provided evidence of phylogenetic incongruence in the SDC. Adoption of NGS will facilitate progress in plant systematics, as whole plastome and rDNA cistrons, partial mitochondrial genomes, and low-copy nuclear markers can now be efficiently obtained for molecular phylogenetics studies.
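The 30-40× minimum depths translate directly into read counts through the standard depth formula (depth = reads × read length / target size). The 150 kb target below is a typical plastome-scale size used purely for illustration:

```python
import math

def depth(num_reads, read_len, target_size):
    """Expected sequencing depth: total sequenced bases / target size."""
    return num_reads * read_len / target_size

def reads_needed(target_depth, read_len, target_size):
    """Reads required to reach a target mean depth."""
    return math.ceil(target_depth * target_size / read_len)

def fraction_covered(d):
    """Lander-Waterman expectation for the fraction of the target
    covered at least once at mean depth d."""
    return 1.0 - math.exp(-d)
```

At the paper's recommended 30× plastome depth, a 150 kb plastome with 100 bp reads needs 45,000 reads, and essentially every base is expected to be covered at least once.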

  2. Development of metamodels for predicting aerosol dispersion in ventilated spaces

    NASA Astrophysics Data System (ADS)

    Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.

    2011-04-01

    Artificial neural network (ANN) based metamodels were developed to describe the relationship between design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios, and the results derived from the CFD simulations were used to train and test the metamodels. Feed-forward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural-network-based metamodels was compared to that of linear and quadratic metamodels derived from the same CFD simulation results. The ANN-based metamodels performed well in predicting independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time and the locations of the inlet and outlet relative to the room height had more impact on particle behavior than the other dimensionless groups.
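Hammersley sequence sampling, used above to place the CFD scenarios in design space, is straightforward to generate: the first coordinate of point i is i/n and the remaining coordinates are radical inverses of i in successive prime bases. A minimal sketch:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        inv += digit / denom
    return inv

def hammersley(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    """n Hammersley points in [0,1)^dims: first coordinate i/n, the
    rest radical inverses in successive prime bases."""
    return [
        [i / n] + [radical_inverse(i, primes[d]) for d in range(dims - 1)]
        for i in range(n)
    ]
```

Each unit-cube coordinate is then rescaled to the physical range of the corresponding design variable.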

  3. Three Dimensional Simulation of the Baneberry Nuclear Event

    NASA Astrophysics Data System (ADS)

    Lomov, Ilya N.; Antoun, Tarabay H.; Wagoner, Jeff; Rambo, John T.

    2004-07-01

    Baneberry, a 10-kiloton nuclear event, was detonated at a depth of 278 m at the Nevada Test Site on December 18, 1970. Shortly after detonation, radioactive gases emanating from the cavity were released into the atmosphere through a shock-induced fissure near surface ground zero. Extensive geophysical investigations, coupled with a series of 1D and 2D computational studies were used to reconstruct the sequence of events that led to the catastrophic failure. However, the geological profile of the Baneberry site is complex and inherently three-dimensional, which meant that some geological features had to be simplified or ignored in the 2D simulations. This left open the possibility that features unaccounted for in the 2D simulations could have had an important influence on the eventual containment failure of the Baneberry event. This paper presents results from a high-fidelity 3D Baneberry simulation based on the most accurate geologic and geophysical data available. The results are compared with available data, and contrasted against the results of the previous 2D computational studies.

  4. Computational analysis for selectivity of histone deacetylase inhibitor by replica-exchange umbrella sampling molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Shuichiro; Sakae, Yoshitake; Itoh, Yukihiro; Suzuki, Takayoshi; Okamoto, Yuko

    2018-03-01

    We performed protein-ligand docking simulations with the ligand T247, which has been reported as a selective inhibitor of the histone deacetylase HDAC3, using the replica-exchange umbrella sampling method in order to estimate the free energy profiles along the ligand docking pathways of the HDAC3-T247 and HDAC2-T247 systems. The simulation results showed that the docked state of the HDAC3-T247 system is more stable than that of the HDAC2-T247 system, although the amino-acid sequences and structures of HDAC3 and HDAC2 are very similar. By comparing structures obtained from the simulations of both systems, we found a difference between the structures of hydrophobic residues at the entrance of the catalytic site. Moreover, we performed conventional molecular dynamics simulations of the HDAC3 and HDAC2 systems without T247, and the results showed the same difference in the hydrophobic structures. We therefore consider that this hydrophobic structure contributes to the stabilization of the docked state of the HDAC3-T247 system. Furthermore, simulations of a mutated HDAC2 system show that Tyr209, one of the hydrophobic residues in HDAC2, plays a key role in this instability.
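The exchange step of replica-exchange umbrella sampling can be illustrated with the Metropolis criterion for swapping configurations between two biased windows. The harmonic bias form and all parameter values below are illustrative, not taken from the paper:

```python
import math

def harmonic_bias(k, x0):
    """Umbrella bias potential W(x) = (k/2) * (x - x0)**2 restraining a
    replica near reaction-coordinate value x0."""
    return lambda x: 0.5 * k * (x - x0) ** 2

def exchange_prob(beta, w_i, w_j, x_i, x_j):
    """Metropolis acceptance probability for swapping the configurations
    x_i, x_j of neighboring umbrella windows i and j (replicas at the
    same temperature, so only the bias energies enter)."""
    delta = (w_i(x_j) + w_j(x_i)) - (w_i(x_i) + w_j(x_j))
    return min(1.0, math.exp(-beta * delta))
```

When each replica sits at its own window center the swap is penalized; when the configurations have already drifted toward the neighboring window, the swap is accepted with probability one.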

  5. Taurus II Stage Test Simulations: Using Large-Scale CFD Simulations to Provide Critical Insight into Plume Induced Environments During Design

    NASA Technical Reports Server (NTRS)

    Struzenberg, L. L.; West, J. S.

    2011-01-01

    This paper describes the use of targeted Loci/CHEM CFD simulations to evaluate the effects of a dual-engine first-stage hot-fire test on an evolving integrated launch pad/test article design. This effort was undertaken as a part of the NESC Independent Assessment of the Taurus II Stage Test Series. The underlying conceptual model included development of a series of computational models and simulations to analyze the plume induced environments on the pad, facility structures and test article. A pathfinder simulation was first developed, capable of providing quick-turn around evaluation of plume impingement pressures on the flame deflector. Results from this simulation were available in time to provide data for an ongoing structural assessment of the deflector. The resulting recommendation was available in a timely manner and was incorporated into construction schedule for the new launch stand under construction at Wallops Flight Facility. A series of Reynolds-Averaged Navier-Stokes (RANS) quasi-steady simulations representative of various key elements of the test profile was performed to identify potential concerns with the test configuration and test profile. As required, unsteady Hybrid-RANS/LES simulations were performed, to provide additional insight into critical aspects of the test sequence. Modifications to the test-specific hardware and facility structures thermal protection as well as modifications to the planned hot-fire test profile were implemented based on these simulation results.

  6. Probabilistic Evaluation of Competing Climate Models

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Chatterjee, S.; Heyman, M.; Cressie, N.

    2017-12-01

    A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. Here, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data, over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. We compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set, as an illustration.
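The parametric-bootstrap step can be sketched as: fit a noise model to the residual component, then generate pseudo-realizations from the fitted model and recompute the summary statistic over the ensemble. An AR(1) noise model is used below purely for illustration; the paper works with decorrelated wavelet coefficients rather than an autoregressive fit:

```python
import numpy as np

def fit_ar1(noise):
    """Moment estimates of an AR(1) model x_t = phi * x_{t-1} + eps_t."""
    x = noise - noise.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - phi * x[:-1]
    return phi, resid.std()

def bootstrap_noise(phi, sigma, n, n_boot, rng):
    """Parametric-bootstrap pseudo-realizations of the fitted noise."""
    out = np.zeros((n_boot, n))
    for b in range(n_boot):
        eps = rng.normal(0.0, sigma, n)
        for t in range(1, n):
            out[b, t] = phi * out[b, t - 1] + eps[t]
    return out
```

Each pseudo-realization is added back to the signal component before the summary statistic is recomputed, yielding the null ensemble from which the probabilities are read off.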

  7. Automated design evolution of stereochemically randomized protein foldamers

    NASA Astrophysics Data System (ADS)

    Ranbhor, Ranjit; Kumar, Anil; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel

    2018-05-01

    Diversification of chain stereochemistry opens up the possibility of an ‘in principle’ increase in the design space of proteins. This huge increase in sequence and consequent structural variation is aimed at the generation of smart materials. To diversify protein structure stereochemically, we introduced L- and D-α-amino acids as the design alphabet. With a sequence design algorithm, we explored the use of specific variables, such as chirality and the sequence of this alphabet, in independent steps. With molecular dynamics, we folded stereochemically diverse homopolypeptides and evaluated their ‘fitness’ for possible design as protein-like foldamers. We propose a fitness function to select the most optimal fold among 1000 structures simulated with an automated repetitive simulated annealing molecular dynamics (AR-SAMD) approach. The highest-scoring poly-leucine folds, with sequence lengths of 24 and 30 amino acids, were then sequence-optimized using a Dead-End Elimination plus Monte Carlo based optimization tool. This paper demonstrates a novel approach for the de novo design of protein-like foldamers.

  8. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
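Under the conjecture that λ is fixed at log 2 for bit scores, a P-value or E-value requires only the Gumbel location parameter μ. A sketch (the μ and database-size values in the test are hypothetical placeholders):

```python
import math

LAMBDA = math.log(2.0)  # the conjectured constant for probabilistic bit scores

def gumbel_pvalue(score, mu, lam=LAMBDA):
    """P(S >= score) under a Gumbel null distribution with location mu."""
    return 1.0 - math.exp(-math.exp(-lam * (score - mu)))

def evalue(score, mu, n_targets, lam=LAMBDA):
    """Expected number of hits scoring >= score in a database search
    over n_targets comparisons."""
    return n_targets * gumbel_pvalue(score, mu, lam)
```

In the high-scoring tail the P-value approaches 2^(score − μ) per comparison, which is why fixing λ removes the need for per-model simulation.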

  9. Learning of Chunking Sequences in Cognition and Behavior

    PubMed Central

    Rabinovich, Mikhail

    2015-01-01

    We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes with each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses between chunk boundaries, their size, and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia. PMID:26584306

  10. Sedimentary modeling and analysis of petroleum system of the upper Tertiary sequences in southern Ulleung sedimentary Basin, East Sea (Sea of Japan)

    NASA Astrophysics Data System (ADS)

    Cheong, D.; Kim, D.; Kim, Y.

    2010-12-01

    Block 6-1, located on the southwestern margin of the Ulleung Basin, East Sea (Sea of Japan), is an area that has recently produced commercial natural gas and condensate. A total of 17 exploratory wells have been drilled, and many seismic explorations have been carried out since the early 1970s. Among these wells and seismic sections, the Gorae 1 well and a seismic section through the Gorae 1-2 well were chosen for this simulation work. A 2-D graphic simulation using SEDPAK was then used to elucidate the evolution, burial history, and diagenesis of the sedimentary sequence. The study area is a suitable place for modeling a petroleum system and evaluating the hydrocarbon potential of the reservoir. Shale source rock lies about 3,500 m below the sea floor, and sandstones interbedded with thin mud layers are distributed as potential reservoir rocks from 3,500 m to 2,000 m deep. Above these, shales act as seal and overburden rocks up to 900 m deep. Input data (sea level, sediment supply, subsidence rate, etc.) for the simulation were taken from several previously published papers, including the well and seismic data, and the thermal maturity of the sediment was calculated from known thermal gradient data. Gas and condensate have been found and commercially produced in this area, and the simulation likewise shows a gas window between 4,000 m and 6,000 m deep, so three possible interpretations can be inferred from the simulation result: first, oil has already migrated to the southeastern area along uplift zones; second, oil was never generated because the organic matter is type 3 kerogen; or finally, any generated oil has been converted into gas by thermal overcooking. SEDPAK has the advantage of providing the timing and depth of generated oil and gas with TTI values, although it cannot itself perform geochemical modeling to analyze the thermal maturity level of source rocks. Based on our simulation results, additional exploratory wells are required to discover deeper gas in the study area.

  11. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources to bin huge metagenomic datasets. Our motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.
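The composition half of such a hybrid binner reduces each read to a k-mer frequency vector and compares it against per-taxon signatures. The sketch below uses k = 2 and cosine similarity as illustrative choices; SPHINX's actual k-mer length and scoring scheme are not specified in this abstract:

```python
from collections import Counter
import math

def kmer_freqs(seq, k=2):
    """Normalized k-mer frequency vector of a DNA string."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine(f, g):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(f.get(x, 0.0) * g.get(x, 0.0) for x in set(f) | set(g))
    nf = math.sqrt(sum(v * v for v in f.values()))
    ng = math.sqrt(sum(v * v for v in g.values()))
    return dot / (nf * ng)

def assign_bin(read, signatures, k=2):
    """Assign the read to the taxon whose signature is most similar."""
    freqs = kmer_freqs(read, k)
    return max(signatures, key=lambda name: cosine(freqs, signatures[name]))
```

In a hybrid scheme this cheap composition step narrows the candidate taxa before the expensive alignment step confirms the assignment.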

  12. Improve homology search sensitivity of PacBio data by correcting frameshifts.

    PubMed

    Du, Nan; Sun, Yanni

    2016-09-01

    Single-molecule, real-time (SMRT) sequencing developed by Pacific Biosciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertions or deletions. During alignment-based homology search, insertion or deletion errors in genes cause frameshifts and may lead to only marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity incurs errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups adopt SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors than a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
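Why indels are so damaging to alignment-based search is easy to see from codon translation: a single deleted base shifts every downstream codon into a different frame. A minimal sketch (this illustrates the frameshift problem only, not Frame-Pro's profile-based correction algorithm):

```python
# Standard genetic code, built from the canonical TCAG-ordered string.
_BASES = "TCAG"
_AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    b1 + b2 + b3: _AA[16 * i + 4 * j + k]
    for i, b1 in enumerate(_BASES)
    for j, b2 in enumerate(_BASES)
    for k, b3 in enumerate(_BASES)
}

def translate(dna, frame=0):
    """Translate one reading frame; trailing partial codons are dropped."""
    return "".join(
        CODON_TABLE[dna[i:i + 3]] for i in range(frame, len(dna) - 2, 3)
    )
```

Deleting one base from ATGAAACCC ("MKP") yields ATGAACCC, which now reads "MN...": everything after the deletion translates in the wrong frame, which is exactly what degrades the alignment scores described above.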

  13. Electrostatic Discharge Test of Multi-Junction Solar Array Coupons After Combined Space Environmental Exposures

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H.; Schneider, Todd; Vaughn, Jason; Hoang, Bao; Funderburk, Victor V.; Wong, Frankie; Gardiner, George

    2010-01-01

    A set of multi-junction GaAs/Ge solar array test coupons was subjected to a sequence of 5-year increments of combined environmental exposure tests. The test coupons capture an integrated design intended for use in a geosynchronous (GEO) space environment. A key component of this test campaign is conducting electrostatic discharge (ESD) tests in the inverted gradient mode. The protocol of the ESD tests is based on ISO/CD 11221, the ISO standard for ESD testing on solar array panels; this standard is currently in its final review, with approval expected in 2010. The test schematic in the ISO reference has been modified with Space Systems/Loral-designed circuitry to better simulate the on-orbit operational conditions of its solar array design. Part of the modified circuitry simulates a solar array panel coverglass flashover discharge. All solar array coupons used in the test campaign consist of four cells. The ESD tests are performed at the beginning of life (BOL) and at each 5-year environmental exposure point. The environmental exposure sequence consists of UV radiation, electron/proton particle radiation, thermal cycling, and ion thruster plume. This paper discusses the coverglass flashover simulation, the ESD test setup, and the importance of the electrical test design in simulating on-orbit operational conditions. Results from 5th-year testing are compared to the baseline ESD characteristics determined at the BOL condition.

  14. Computational Evaluation of the Strict Master and Random Template Models of Endogenous Retrovirus Evolution

    PubMed Central

    Nascimento, Fabrícia F.; Rodrigo, Allen G.

    2016-01-01

    Transposable elements (TEs) are DNA sequences that are able to replicate and move within and between host genomes. Their mechanism of replication is also shared with endogenous retroviruses (ERVs), which are also a type of TE that represent an ancient retroviral infection within animal genomes. Two models have been proposed to explain TE proliferation in host genomes: the strict master model (SMM), and the random template (or transposon) model (TM). In SMM only a single copy of a given TE lineage is able to replicate, and all other genomic copies of TEs are derived from that master copy. In TM, any element of a given family is able to replicate in the host genome. In this paper, we simulated ERV phylogenetic trees under variations of SMM and TM. To test whether current phylogenetic programs can recover the simulated ERV phylogenies, DNA sequence alignments were simulated and maximum likelihood trees were reconstructed and compared to the simulated phylogenies. Results indicate that visual inspection of phylogenetic trees alone can be misleading. However, if a set of statistical summaries is calculated, we are able to distinguish between models with high accuracy by using a data mining algorithm that we introduce here. We also demonstrate the use of our data mining algorithm with empirical data for the porcine endogenous retrovirus (PERV), an ERV that is able to replicate in human and pig cells in vitro. PMID:27649303

  15. RNA-Seq Alignment to Individualized Genomes Improves Transcript Abundance Estimates in Multiparent Populations

    PubMed Central

    Munger, Steven C.; Raghupathy, Narayanan; Choi, Kwangbom; Simons, Allen K.; Gatti, Daniel M.; Hinerfeld, Douglas A.; Svenson, Karen L.; Keller, Mark P.; Attie, Alan D.; Hibbs, Matthew A.; Graber, Joel H.; Chesler, Elissa J.; Churchill, Gary A.

    2014-01-01

    Massively parallel RNA sequencing (RNA-seq) has yielded a wealth of new insights into transcriptional regulation. A first step in the analysis of RNA-seq data is the alignment of short sequence reads to a common reference genome or transcriptome. Genetic variants that distinguish individual genomes from the reference sequence can cause reads to be misaligned, resulting in biased estimates of transcript abundance. Fine-tuning of read alignment algorithms does not correct this problem. We have developed Seqnature software to construct individualized diploid genomes and transcriptomes for multiparent populations and have implemented a complete analysis pipeline that incorporates other existing software tools. We demonstrate in simulated and real data sets that alignment to individualized transcriptomes increases read mapping accuracy, improves estimation of transcript abundance, and enables the direct estimation of allele-specific expression. Moreover, when applied to expression QTL mapping we find that our individualized alignment strategy corrects false-positive linkage signals and unmasks hidden associations. We recommend the use of individualized diploid genomes over reference sequence alignment for all applications of high-throughput sequencing technology in genetically diverse populations. PMID:25236449
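The core of the individualized-genome idea is substituting a sample's known alleles into the reference before alignment. A toy sketch of the substitution step (function names are hypothetical; Seqnature additionally handles indels and maintains diploid coordinate maps):

```python
def apply_snps(reference, snps):
    """Return an individualized sequence with SNP alleles substituted.
    `snps` maps 0-based positions to alternate bases; indels and diploid
    phasing, which Seqnature also handles, are omitted from this sketch."""
    seq = list(reference)
    for pos, alt in snps.items():
        seq[pos] = alt
    return "".join(seq)

def hamming(a, b):
    """Mismatch count between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))
```

A read carrying the sample's own alleles mismatches the generic reference but matches the individualized one exactly, which is precisely the misalignment bias the paper's pipeline removes.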

  16. Universality of long-range correlations in expansion randomization systems

    NASA Astrophysics Data System (ADS)

    Messer, P. W.; Lässig, M.; Arndt, P. F.

    2005-10-01

    We study the stochastic dynamics of sequences evolving by single-site mutations, segmental duplications, deletions, and random insertions. These processes are relevant for the evolution of genomic DNA. They define a universality class of non-equilibrium 1D expansion-randomization systems with generic stationary long-range correlations in a regime of growing sequence length. We obtain explicitly the two-point correlation function of the sequence composition and the distribution function of the composition bias in sequences of finite length. The characteristic exponent χ of these quantities is determined by the ratio of two effective rates, which are explicitly calculated for several specific sequence evolution dynamics of the universality class. Depending on the value of χ, we find two different scaling regimes, which are distinguished by the detectability of the initial composition bias. All analytic results are accurately verified by numerical simulations. We also discuss the non-stationary build-up and decay of correlations, as well as more complex evolutionary scenarios, where the rates of the processes vary in time. Our findings provide a possible example for the emergence of universality in molecular biology.
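A minimal member of this universality class can be simulated directly: a binary sequence evolving by single-site mutations and segmental duplications, whose composition bias can then be tracked. The rates and segment lengths below are arbitrary, and the deletion and random-insertion processes treated in the paper are omitted for brevity:

```python
import random

def evolve(seq, steps, mut_rate, dup_rate, max_seg=8, rng=None):
    """Evolve a binary sequence by single-site mutation and segmental
    duplication (an illustrative subset of the paper's process class)."""
    rng = rng or random.Random(0)
    seq = list(seq)
    for _ in range(steps):
        if rng.random() < dup_rate and len(seq) > 1:
            # duplicate a random segment in place
            L = rng.randint(1, min(max_seg, len(seq)))
            start = rng.randrange(len(seq) - L + 1)
            seq[start:start] = seq[start:start + L]
        if rng.random() < mut_rate:
            i = rng.randrange(len(seq))
            seq[i] = 1 - seq[i]
    return seq

def composition_bias(seq):
    """Excess of ones over the unbiased value 1/2."""
    return sum(seq) / len(seq) - 0.5
```

Measuring `composition_bias` over many runs and sequence lengths is the kind of observable whose distribution and two-point correlations the paper computes analytically.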

  17. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as an object-recognition-in-a-scene problem: the query and the database become the object and the scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one-million-base-pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
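
    The core operation described above can be sketched in ordinary Python: each base is mapped to a gray intensity, and the query is slid across the scene, scoring each window with a normalized correlation. This is a 1-D digital toy of the idea (the paper uses 2-D images and an optical correlator), and the intensity values are assumptions, not the authors' coding.

```python
# Hypothetical gray-level coding for nucleotides (illustrative values,
# not the paper's actual intensities).
GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}

def encode(seq):
    """Map a DNA string to its pixel-intensity representation."""
    return [GRAY[b] for b in seq]

def pearson(x, y):
    """Zero-normalized correlation; 1.0 only for an (affine) exact match."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy) if dx and dy else 0.0

def best_offset(scene, query):
    """Slide the encoded query over the encoded scene and return the
    offset of the highest-correlation window (template matching)."""
    s, q = encode(scene), encode(query)
    scores = [pearson(s[i:i + len(q)], q)
              for i in range(len(s) - len(q) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

print(best_offset("ACGTACGTTTGCAGGTAC", "TTGCAG"))  # exact match at offset 8
```

    An optical correlator would evaluate all window scores in parallel in the Fourier plane, which is where the claimed light-speed advantage comes from; the digital loop above is the slow equivalent.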

  18. Mechanical features of various silkworm crystalline considering hydration effect via molecular dynamics simulations.

    PubMed

    Kim, Yoonjung; Lee, Myeongsang; Choi, Hyunsung; Baek, Inchul; Kim, Jae In; Na, Sungsoo

    2018-04-01

    Silk materials are receiving significant attention as base materials for various functional nanomaterials and nanodevices, due to their exceptionally high mechanical properties, biocompatibility, and biodegradability. Although crystalline silk regions are composed of various repetitive motifs with differing amino acid sequences, how humidity acts differently on each of these motifs and their structural characteristics remains unclear. We report molecular dynamics (MD) simulations on various silkworm fibroins composed of the major motifs (i.e., (GAGAGS)n, (GAGAGA)n, and (GAGAGY)n) at varying degrees of hydration; we reveal how each major motif changes with the degree of hydration using MD simulations, and characterize their mechanical properties via steered molecular dynamics simulations. Our results explain what effects humidity can have on nanoscale materials and devices consisting of crystalline silk materials.

  19. Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery

    NASA Astrophysics Data System (ADS)

    Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.

    2017-05-01

    In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
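
    A Poisson-Gaussian mixture of the kind described above can be sketched in a few lines: signal-dependent shot noise modeled as a Poisson draw on each pixel, plus additive Gaussian read noise. The gain and read-noise values below are illustrative assumptions, not the paper's parameters.

```python
import math
import random

random.seed(0)

def sample_poisson(lam):
    """Poisson draw via Knuth's algorithm (adequate for moderate rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def add_poisson_gaussian_noise(image, gain=1.0, read_sigma=2.0):
    """Degrade an image (list of rows) with signal-dependent Poisson
    shot noise plus signal-independent Gaussian read noise."""
    return [[sample_poisson(px / gain) * gain + random.gauss(0.0, read_sigma)
             for px in row]
            for row in image]

clean = [[100.0] * 4 for _ in range(3)]  # flat 3x4 test patch
noisy = add_poisson_gaussian_noise(clean)
```

    In practice the Poisson term dominates in bright regions and the Gaussian term in dark ones, which is why restoration algorithms are sensitive to both components.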

  20. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
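
    Sequential simulated annealing for SAT, the baseline whose decision sequence the parallel method reproduces, can be sketched as follows. The clause encoding, cooling schedule, and parameters here are illustrative choices, not those of the report.

```python
import math
import random

random.seed(1)

def num_unsat(clauses, assign):
    """Count clauses left unsatisfied; literals are signed 1-based ints."""
    return sum(1 for cl in clauses
               if not any((lit > 0) == assign[abs(lit)] for lit in cl))

def anneal_sat(clauses, n_vars, t0=2.0, cooling=0.95, sweeps=100):
    """Flip one variable at a time, accepting uphill moves with
    Boltzmann probability exp(-delta/T); T is lowered geometrically."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    cost, temp = num_unsat(clauses, assign), t0
    for _ in range(sweeps):
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]              # propose a flip
            delta = num_unsat(clauses, assign) - cost
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                cost += delta                      # accept the flip
            else:
                assign[v] = not assign[v]          # reject: undo the flip
        temp *= cooling
    return assign, cost

# Tiny satisfiable 3-SAT instance: (x1 v x2 v ~x3) & (~x1 v x3 v x4) & ...
clauses = [(1, 2, -3), (-1, 3, 4), (2, -4, 1), (-2, -3, 4)]
assignment, unsat = anneal_sat(clauses, 4)
```

    The speculative-computation idea in the report parallelizes exactly this inner loop: processors evaluate future flips speculatively while preserving the same accept/reject sequence the sequential version would produce.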

  1. SU-E-J-217: Multiparametric MR Imaging of Cranial Tumors On a Dedicated 1.0T MR Simulator Prior to Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Glide-Hurst, C; Liu, M

    Purpose: Quantitative magnetic resonance imaging (MRI) of cranial lesions prior to stereotactic radiosurgery (SRS) may improve treatment planning and provide potential prognostic value. The practicality and logistics of acquiring advanced multiparametric MRI sequences to measure vascular and cellular properties of cerebral tumors are explored on a 1.0 Tesla MR Simulator. Methods: MR simulation was performed immediately following routine CT simulation on a 1T MR Simulator. The MR sequences used were, in the order performed: T2-Weighted Turbo Spin Echo (T2W-TSE), T2 FLAIR, diffusion-weighted imaging (DWI; b = 0, 800, used to generate an apparent diffusion coefficient (ADC) map), 3D T1-Weighted Fast Field Echo (T1W-FFE), Dynamic Contrast Enhanced (DCE), and post-gadolinium contrast-enhanced 3D T1W-FFE images. T1 pre-contrast values were generated by acquiring six different flip angles. The arterial input function was derived from manually selected arterial pixels in the perfusion images. The extended Tofts model was used to generate the permeability maps. Routine MRI scans took about 30 minutes to complete; the additional scans added 12 minutes. Results: To date, seven patients with cerebral tumors have been imaged and tumor physiology characterized. For example, in a glioblastoma patient, the volumes contoured on the T1 Gd images, the ADC map, and the pharmacokinetic (Ktrans) map were 1.9, 1.4, and 1.5 cc, respectively, with strong spatial correlation. The mean ADC value of the entire volume was 1141 μm²/s, while the value in the white matter was 811 μm²/s. The mean value of Ktrans was 0.02 min⁻¹ in the tumor volume and 0.00 in the normal white matter. Conclusion: Our initial results suggest that multiparametric MRI sequences may provide a more quantitative evaluation of vascular and tumor properties. Implementing functional imaging during MR-SIM may be particularly beneficial in assessing tumor extent, differentiating radiation necrosis from tumor recurrence, and establishing reliable biomarkers for treatment response evaluation. The Department of Radiation Oncology at Henry Ford Health System has a research agreement with Varian Medical Systems and Philips Health Care.

  2. Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.

    PubMed

    Cho, Soobum; Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity. However, the large number of subcarriers used for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a 16:20 ratio of WHT to randomly generated sequences gives the maximum PAPR reduction. The number of near-optimal iterations is derived to avoid exhaustive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system.
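
    The PAPR metric that DSI targets is simply the peak instantaneous power of the time-domain OFDM symbol over its average power. A minimal sketch, using a direct inverse DFT and arbitrary subcarrier values (not the paper's LTE resource grid):

```python
import cmath
import math

def papr_db(subcarriers):
    """PAPR in dB of the time-domain symbol obtained from one block of
    frequency-domain subcarrier values, via a direct inverse DFT."""
    n = len(subcarriers)
    time = [sum(x * cmath.exp(2j * cmath.pi * k * t / n)
                for k, x in enumerate(subcarriers)) / n
            for t in range(n)]
    powers = [abs(s) ** 2 for s in time]
    return 10.0 * math.log10(max(powers) / (sum(powers) / n))

# All-equal subcarriers add coherently into one peak: the worst case,
# 10*log10(N) dB. A single active subcarrier has a constant envelope: 0 dB.
print(round(papr_db([1] * 8), 2))       # 9.03
print(round(papr_db([1] + [0] * 7), 2))  # 0.0
```

    A DSI scheme chooses the dummy values on otherwise-null subcarriers so that this peak is pulled down without touching the data-bearing subcarriers, which is why it costs no BER.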

  3. Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation

    NASA Astrophysics Data System (ADS)

    Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.

    2018-01-01

    Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species are one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.

  4. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    PubMed Central

    Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.

    2017-01-01

    Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187

  5. Resolution enhancement using a new multiple-pulse decoupling sequence for quadrupolar nuclei.

    PubMed

    Delevoye, L; Trébosc, J; Gan, Z; Montagne, L; Amoureux, J-P

    2007-05-01

    A new decoupling composite pulse sequence is proposed to remove the broadening of spin S=1/2 magic-angle spinning (MAS) spectra arising from scalar coupling with a quadrupolar nucleus I. It is illustrated on the (31)P spectrum of an aluminophosphate, AlPO(4)-14, which is broadened by the presence of (27)Al/(31)P scalar couplings. The multiple-pulse (MP) sequence has the advantage over continuous-wave (CW) irradiation of efficiently annulling the scalar dephasing without reintroducing the dipolar interaction. The MP decoupling sequence is first described in a rotor-synchronised version (RS-MP) in which only one parameter needs to be adjusted. It clearly avoids dipolar recoupling and thereby achieves better resolution than the CW sequence. In a second, improved version, the MP sequence is studied experimentally in the vicinity of the perfect rotor-synchronised conditions. The full width at half maximum (FWHM) of 65 Hz obtained with (27)Al CW decoupling decreases to 48 Hz with RS-MP decoupling and to 30 Hz with rotor-asynchronised MP (RA-MP) decoupling. The main phenomena are explained using both experimental results and numerical simulations.

  6. Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink

    PubMed Central

    Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity. However, the large number of subcarriers used for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that a 16:20 ratio of WHT to randomly generated sequences gives the maximum PAPR reduction. The number of near-optimal iterations is derived to avoid exhaustive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system. PMID:24883376

  7. Rapid evolution of cis-regulatory sequences via local point mutations

    NASA Technical Reports Server (NTRS)

    Stone, J. R.; Wray, G. A.

    2001-01-01

    Although the evolution of protein-coding sequences within genomes is well understood, the same cannot be said of the cis-regulatory regions that control transcription. Yet, changes in gene expression are likely to constitute an important component of phenotypic evolution. We simulated the evolution of new transcription factor binding sites via local point mutations. The results indicate that new binding sites appear and become fixed within populations on microevolutionary timescales under an assumption of neutral evolution. Even combinations of two new binding sites evolve very quickly. We predict that local point mutations continually generate considerable genetic variation that is capable of altering gene expression.

  8. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts.

    PubMed

    Lun, Aaron T L; Bach, Karsten; Marioni, John C

    2016-04-27

    Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.

  9. Stepwise detection of recombination breakpoints in sequence alignments.

    PubMed

    Graham, Jinko; McNeney, Brad; Seillier-Moiseiwitsch, Françoise

    2005-03-01

    We propose a stepwise approach to identifying recombination breakpoints in a sequence alignment. The approach can be applied to any recombination detection method that uses a permutation test and provides estimates of breakpoints. We illustrate the approach with analyses of a simulated dataset and alignments of real data from HIV-1 and human chromosome 7. The presented simulation results compare the statistical properties of one-step and two-step procedures. More breakpoints are found with a two-step procedure than with a single application of a given method, particularly at higher recombination rates, where the additional breakpoints were located at the cost of only a slight increase in the number of falsely declared breakpoints. However, a large proportion of breakpoints still go undetected. A makefile and C source code for phylogenetic profiling and the maximum chi-squared method, tested with the gcc compiler on Linux and Windows XP, are available at http://stat-db.stat.sfu.ca/stepwise/ (contact: jgraham@stat.sfu.ca).

  10. A Population Study of Wide-Separation Brown Dwarf Companions to Main Sequence Stars

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey J.

    2005-01-01

    Increased interest in infrared astronomy has opened the frontier to the study of cooler objects that shed significant light on the formation of planetary systems. Brown dwarf research provides a wealth of information useful for sorting through a myriad of proposed formation theories. Our study combines observational data from 2MASS with rigorous computer simulations to estimate the true population of long-range (greater than 1000 AU) brown dwarf companions in the solar neighborhood (less than 25 pc from Earth). Expanding on Gizis et al. (2001), we find that previous studies significantly underestimated their margins of error once orbit eccentricity, longitude of pericenter, angle of inclination, field star density, and primary and secondary luminosities are included as parameters influencing the companion systems in observational studies. We apply our simulation results to current L- and T-dwarf catalogs to provide updated estimates of the frequency of wide-separation brown dwarf companions to main sequence stars.

  11. Hydrodynamic models for novae with ejecta rich in oxygen, neon and magnesium

    NASA Technical Reports Server (NTRS)

    Starrfield, S.; Sparks, W. M.; Truran, J. W.

    1985-01-01

    The characteristics of a new class of novae are identified and explained. This class consists of those objects that have been observed to eject material rich in oxygen, neon, magnesium, and aluminum at high velocities. We propose that for this class of novae the outburst is occurring not on a carbon-oxygen white dwarf but on an oxygen-neon-magnesium white dwarf which has evolved from a star which had a main sequence mass of approx. 8 solar masses to approx. 12 solar masses. An outburst was simulated by evolving 1.25 solar mass white dwarfs accreting hydrogen rich material at various rates. The effective enrichment of the envelope by ONeMg material from the core is simulated by enhancing oxygen in the accreted layers. The resulting evolutionary sequences can eject the entire accreted envelope plus core material at high velocities. They can also become super-Eddington at maximum bolometric luminosity. The expected frequency of such events (approx. 1/4) is in good agreement with the observed numbers of these novae.

  12. A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.

    PubMed

    Lee, I; Sikora, R; Shaw, M J

    1997-01-01

    Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation that GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of this approach on flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.

  13. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    PubMed

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet, distributional properties of information entropy as a random variable have seldom been the object of study, leading to researchers mainly using linear models or simulation-based analytical approach to assess differences in information content, when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results coming from studies in the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set coming from a real experiment on animal communication.
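
    For context, the naive plug-in (maximum-likelihood) entropy estimator that such models improve upon is a one-liner; its downward bias in small samples is one motivation for the Bayesian, Dirichlet-based treatment the paper builds on.

```python
import math
from collections import Counter

def plugin_entropy(symbols):
    """Plug-in Shannon entropy estimate, in bits, computed from the
    empirical symbol frequencies (biased low for small samples)."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(symbols).values())

print(plugin_entropy("AABB"))  # two equiprobable symbols -> 1.0 bit
```

    When entropy is measured repeatedly across experimental conditions, comparing such point estimates with a linear model ignores their sampling distribution, which is the gap the Dirichlet-multinomial regression is designed to close.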

  14. Probing energetics of Abeta fibril elongation by molecular dynamics simulations.

    PubMed

    Takeda, Takako; Klimov, Dmitri K

    2009-06-03

    Using replica exchange molecular dynamics simulations and an all-atom implicit solvent model, we probed the energetics of Abeta(10-40) fibril growth. The analysis of the interactions between incoming Abeta peptides and the fibril led us to two conclusions. First, considerable variations in fibril binding propensities are observed along the Abeta sequence. The peptides in the fibril and those binding to its edge interact primarily through their N-termini. Therefore, the mutations affecting the Abeta positions 10-23 are expected to have the largest impact on fibril elongation compared with those occurring in the C-terminus and turn. Second, we performed weak perturbations of the binding free energy landscape by scanning partial deletions of side-chain interactions at various Abeta sequence positions. The results imply that strong side-chain interactions, in particular hydrophobic contacts, impede fibril growth by favoring disordered docking of incoming peptides. Therefore, fibril elongation may be promoted by moderate reduction of Abeta hydrophobicity. The comparison with available experimental data is presented.

  15. Choice of Reference Sequence and Assembler for Alignment of Listeria monocytogenes Short-Read Sequence Data Greatly Influences Rates of Error in SNP Analyses

    PubMed Central

    Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco

    2014-01-01

    The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. 
In total, this study demonstrates that researchers should test a variety of conditions to achieve optimal results. PMID:25144537

  16. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
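
    The scaling figure quoted above follows from the standard definitions: speedup is the one-CPU time over the p-CPU time, and parallel efficiency is speedup divided by p. A quick helper (the timing values below are made-up placeholders, not the paper's measurements):

```python
def speedup_and_efficiency(t1, tp, p):
    """Return (speedup, efficiency) for a run taking t1 seconds on one
    CPU and tp seconds on p CPUs: speedup = t1/tp, efficiency = speedup/p."""
    s = t1 / tp
    return s, s / p

# An 85% efficiency on 1024 CPUs corresponds to a ~870x speedup
# (0.85 * 1024 = 870.4); illustrative timings chosen to match:
s, e = speedup_and_efficiency(t1=1024.0, tp=1.176, p=1024)
print(round(s), round(e, 3))
```
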

  17. High temporal resolution dynamic contrast-enhanced MRI using compressed sensing-combined sequence in quantitative renal perfusion measurement.

    PubMed

    Chen, Bin; Zhao, Kai; Li, Bo; Cai, Wenchao; Wang, Xiaoying; Zhang, Jue; Fang, Jing

    2015-10-01

    To demonstrate the feasibility of the improved temporal resolution by using compressed sensing (CS) combined imaging sequence in dynamic contrast-enhanced MRI (DCE-MRI) of kidney, and investigate its quantitative effects on renal perfusion measurements. Ten rabbits were included in the accelerated scans with a CS-combined 3D pulse sequence. To evaluate the image quality, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the proposed CS strategy and the conventional full sampling method. Moreover, renal perfusion was estimated by using the separable compartmental model in both CS simulation and realistic CS acquisitions. The CS method showed DCE-MRI images with improved temporal resolution and acceptable image contrast, while presenting significantly higher SNR than the fully sampled images (p<.01) at 2-, 3- and 4-X acceleration. In quantitative measurements, renal perfusion results were in good agreement with the fully sampled one (concordance correlation coefficient=0.95, 0.91, 0.88) at 2-, 3- and 4-X acceleration in CS simulation. Moreover, in realistic acquisitions, the estimated perfusion by the separable compartmental model exhibited no significant differences (p>.05) between each CS-accelerated acquisition and the full sampling method. The CS-combined 3D sequence could improve the temporal resolution for DCE-MRI in kidney while yielding diagnostically acceptable image quality, and it could provide effective measurements of renal perfusion. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. N-body simulations of collective effects in spiral and barred galaxies

    NASA Astrophysics Data System (ADS)

    Zhang, X.

    2016-10-01

    We present gravitational N-body simulations of the secular morphological evolution of disk galaxies induced by density wave modes. In particular, we address the demands collective effects place on the choice of simulation parameters, and show that the common practice of the use of a large gravity softening parameter was responsible for the failure of past simulations to correctly model the secular evolution process in galaxies, even for those simulations where the choice of basic state allows an unstable mode to emerge, a prerequisite for obtaining the coordinated radial mass flow pattern needed for secular evolution of galaxies along the Hubble sequence. We also demonstrate that the secular evolution rates measured in our improved simulations agree to an impressive degree with the corresponding rates predicted by the recently-advanced theories of dynamically-driven secular evolution of galaxies. The results of the current work, besides having direct implications on the cosmological evolution of galaxies, also shed light on the general question of how irreversibility emerges from a nominally reversible physical system.

  19. Efficient experimental design and analysis strategies for the detection of differential expression using RNA-Sequencing.

    PubMed

    Robles, José A; Qureshi, Sumaira E; Stephen, Stuart J; Wilson, Susan R; Burden, Conrad J; Taylor, Jennifer M

    2012-09-17

    RNA sequencing (RNA-Seq) has emerged as a powerful approach for the detection of differential gene expression, with both high-throughput and high-resolution capabilities possible depending upon the experimental design chosen. Multiplex experimental designs are now readily available; these can be utilised to increase the number of samples or replicates profiled, at the cost of decreased sequencing depth generated per sample. These strategies impact on the power of the approach to accurately identify differential expression. This study presents a detailed analysis of the power to detect differential expression in a range of scenarios, including simulated null and differential expression distributions with varying numbers of biological or technical replicates, sequencing depths and analysis methods. Differential and non-differential expression datasets were simulated using a combination of negative binomial and exponential distributions derived from real RNA-Seq data. These datasets were used to evaluate the performance of three commonly used differential expression analysis algorithms and to quantify the changes in power with respect to true and false positive rates when simulating variations in sequencing depth, biological replication and multiplex experimental design choices. This work quantitatively explores comparisons between contemporary analysis tools and experimental design choices for the detection of differential expression using RNA-Seq. We found that the DESeq algorithm performs more conservatively than edgeR and NBPSeq. With regard to testing of various experimental designs, this work strongly suggests that greater power is gained through the use of biological replicates relative to library (technical) replicates and sequencing depth. Strikingly, sequencing depth could be reduced to as low as 15% without substantial impacts on false positive or true positive rates.
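
    Count data of the sort simulated in such studies are conventionally drawn from a negative binomial via its Gamma-Poisson mixture representation. A stdlib-only sketch, with arbitrary mean and dispersion values rather than the study's fitted parameters:

```python
import math
import random

random.seed(42)

def sample_poisson(lam):
    """Poisson draw via Knuth's algorithm (fine for moderate rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def nb_counts(mean, dispersion, n):
    """Draw n negative-binomial read counts via the Gamma-Poisson
    mixture: rate ~ Gamma(shape=1/dispersion, scale=mean*dispersion),
    count ~ Poisson(rate). The resulting variance is
    mean + dispersion * mean**2, the usual RNA-Seq overdispersion model."""
    shape, scale = 1.0 / dispersion, mean * dispersion
    return [sample_poisson(random.gammavariate(shape, scale))
            for _ in range(n)]

counts = nb_counts(mean=10.0, dispersion=0.1, n=2000)
print(sum(counts) / len(counts))  # close to the specified mean of 10
```

    Repeating such draws per gene, per replicate, and per depth setting, then running a detection method on each simulated dataset, is the basic loop behind the power curves the study reports.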

  20. Evaluating Variant Calling Tools for Non-Matched Next-Generation Sequencing Data

    NASA Astrophysics Data System (ADS)

    Sandmann, Sarah; de Graaf, Aniek O.; Karimi, Mohsen; van der Reijden, Bert A.; Hellström-Lindberg, Eva; Jansen, Joop H.; Dugas, Martin

    2017-02-01

    Valid variant calling results are crucial for the use of next-generation sequencing in clinical routine. However, there are numerous variant calling tools that usually differ in algorithms, filtering strategies, recommendations and, thus, also in their output. We evaluated eight open-source tools regarding their ability to call single nucleotide variants and short indels with allelic frequencies as low as 1% in non-matched next-generation sequencing data: GATK HaplotypeCaller, Platypus, VarScan, LoFreq, FreeBayes, SNVer, SAMtools and VarDict. We analysed two real datasets from patients with myelodysplastic syndrome, covering 54 Illumina HiSeq samples and 111 Illumina NextSeq samples. Mutations were validated by re-sequencing on the same platform, on a different platform and expert-based review. In addition, we considered two simulated datasets with varying coverage and error profiles, covering 50 samples each. In all cases an identical target region consisting of 19 genes (42,322 bp) was analysed. Altogether, no tool succeeded in calling all mutations. High sensitivity was always accompanied by low precision. The influence of varying coverage and background noise on variant calling was generally low. Taking everything into account, VarDict performed best. However, our results indicate that there is a need to improve the reproducibility of results in the context of multithreading.

  1. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on Fourier-series synthesis, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences of 601 random numbers each were collected for each generator and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
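
    A minimal sketch of the Fourier-synthesis idea, assuming a flat unit-amplitude spectrum with uniformly random phases (the prescribed spectra in the paper may differ):

```python
import numpy as np

def fourier_white_noise(n, rng):
    """Synthesize an approximately Gaussian white noise sequence of length n
    from a flat amplitude spectrum and uniformly random phases.

    A sketch of the Fourier-synthesis approach, not the paper's exact algorithm:
    summing many random-phase sinusoids yields near-Gaussian samples by the
    central limit theorem, and the flat spectrum makes the sequence white.
    """
    n_freq = n // 2 + 1
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_freq)
    spectrum = np.exp(1j * phases)   # unit amplitude at every frequency bin
    spectrum[0] = 0.0                # drop the DC bin so the sequence has zero mean
    x = np.fft.irfft(spectrum, n=n)
    return x / x.std()               # normalise to unit variance

rng = np.random.default_rng(1)
x = fourier_white_noise(601, rng)    # 601 samples, matching the study's sequence length
```

    The resulting sequence can then be checked with the same statistics the paper uses (mean, variance, autocorrelation, power spectral density).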

  2. Quadrupedal locomotor simulation: producing more realistic gaits using dual-objective optimization

    PubMed Central

    Hirasaki, Eishi

    2018-01-01

    In evolutionary biomechanics it is often considered that gaits should evolve to minimize the energetic cost of travelling a given distance. In gait simulation this goal often leads to convincing gait generation. However, as the musculoskeletal models used get increasingly sophisticated, it becomes apparent that such a single goal can lead to extremely unrealistic gait patterns. In this paper, we explore the effects of requiring adequate lateral stability and show how this increases both energetic cost and the realism of the generated walking gait in a high biofidelity chimpanzee musculoskeletal model. We also explore the effects of changing the footfall sequences in the simulation so it mimics both the diagonal sequence walking gaits that primates typically use and also the lateral sequence walking gaits that are much more widespread among mammals. It is apparent that adding a lateral stability criterion has an important effect on the footfall phase relationship, suggesting that lateral stability may be one of the key drivers behind the observed footfall sequences in quadrupedal gaits. The observation that single optimization goals are no longer adequate for generating gait in current models has important implications for the use of biomimetic virtual robots to predict the locomotor patterns in fossil animals. PMID:29657790

  3. DnaSAM: Software to perform neutrality testing for large datasets with complex null models.

    PubMed

    Eckert, Andrew J; Liechty, John D; Tearse, Brandon R; Pande, Barnaly; Neale, David B

    2010-05-01

    Patterns of DNA sequence polymorphisms can be used to understand the processes of demography and adaptation within natural populations. High-throughput generation of DNA sequence data has historically been the bottleneck with respect to data processing and experimental inference. Advances in marker technologies have largely solved this problem. Currently, the limiting step is computational, with most molecular population genetic software allowing only a gene-by-gene analysis through a graphical user interface. An easy-to-use analysis program that allows both high-throughput processing of multiple sequence alignments and the flexibility to simulate data under complex demographic scenarios is currently lacking. We introduce a new program, named DnaSAM, which allows high-throughput estimation of DNA sequence diversity and neutrality statistics from experimental data, along with the ability to test those statistics via Monte Carlo coalescent simulations. These simulations are conducted using the ms program, which is able to incorporate several genetic parameters (e.g. recombination) and demographic scenarios (e.g. population bottlenecks). The output is a set of diversity and neutrality statistics, with associated probability values under a user-specified null model, that are stored in an easy-to-manipulate text file. © 2009 Blackwell Publishing Ltd.

  4. Pilot self-coding applied in optical OFDM systems

    NASA Astrophysics Data System (ADS)

    Li, Changping; Yi, Ying; Lee, Kyesan

    2015-04-01

    This paper studies a frequency offset correction technique applicable to optical OFDM systems. Through theoretical analysis and computer simulations, we observe that our proposed scheme, named pilot self-coding (PSC), is distinctly effective in correcting frequency offset, mitigating the OFDM performance deterioration caused by inter-carrier interference and common phase error. The main approach is to assign a pilot subcarrier before the data subcarriers and copy this subcarrier sequence to the symmetric side. The simulation results verify that our proposed PSC is indeed effective against high degrees of frequency offset.

  5. Cloud GPU-based simulations for SQUAREMR.

    PubMed

    Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H

    2017-01-01

    Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms.
The developed cloud-based cluster and the optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would likely be acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Aromatic sulfonation with sulfur trioxide: mechanism and kinetic model.

    PubMed

    Moors, Samuel L C; Deraet, Xavier; Van Assche, Guy; Geerlings, Paul; De Proft, Frank

    2017-01-01

    Electrophilic aromatic sulfonation of benzene with sulfur trioxide is studied with ab initio molecular dynamics simulations in the gas phase, and in explicit noncomplexing (CCl3F) and complexing (CH3NO2) solvent models. We investigate different possible reaction pathways, the number of SO3 molecules participating in the reaction, and the influence of the solvent. Our simulations confirm the existence of a low-energy concerted pathway with formation of a cyclic transition state involving two SO3 molecules. Based on the simulation results, we propose a sequence of elementary reaction steps and a kinetic model compatible with experimental data. Furthermore, a new alternative reaction pathway is proposed in complexing solvent, involving two SO3 and one CH3NO2.

  7. Computer simulation of random variables and vectors with arbitrary probability distribution laws

    NASA Technical Reports Server (NTRS)

    Bogdan, V. M.

    1981-01-01

    Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n random variables if their joint probability distribution is known.
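
    In one dimension the construction reduces to classical inverse-transform sampling: if U is uniform on (0,1), then F⁻¹(U) has distribution F. A minimal sketch, using an illustrative exponential distribution:

```python
import math
import random

def inverse_transform_sample(inv_cdf, rng):
    """Draw one sample with distribution F by applying the inverse CDF to a
    uniform (0,1) variate: the one-dimensional instance of the construction."""
    return inv_cdf(rng.random())

# Example: rate-1 exponential, F(x) = 1 - exp(-x), so F^{-1}(u) = -ln(1 - u).
rng = random.Random(42)
samples = [
    inverse_transform_sample(lambda u: -math.log(1.0 - u), rng)
    for _ in range(100_000)
]
mean = sum(samples) / len(samples)  # should be close to 1 for a rate-1 exponential
```

    The n-dimensional construction chains this idea: each x_i is drawn from the conditional distribution of F given the previously generated coordinates.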

  8. Genotype calling from next-generation sequencing data using haplotype information of reads

    PubMed Central

    Zhi, Degui; Wu, Jihua; Liu, Nianjun; Zhang, Kui

    2012-01-01

    Motivation: Low coverage sequencing provides an economic strategy for whole genome sequencing. When sequencing a set of individuals, genotype calling can be challenging due to low sequencing coverage. Linkage disequilibrium (LD) based refinement of genotyping calling is essential to improve the accuracy. Current LD-based methods use read counts or genotype likelihoods at individual potential polymorphic sites (PPSs). Reads that span multiple PPSs (jumping reads) can provide additional haplotype information overlooked by current methods. Results: In this article, we introduce a new Hidden Markov Model (HMM)-based method that can take into account jumping reads information across adjacent PPSs and implement it in the HapSeq program. Our method extends the HMM in Thunder and explicitly models jumping reads information as emission probabilities conditional on the states of adjacent PPSs. Our simulation results show that, compared to Thunder, HapSeq reduces the genotyping error rate by 30%, from 0.86% to 0.60%. The results from the 1000 Genomes Project show that HapSeq reduces the genotyping error rate by 12 and 9%, from 2.24% and 2.76% to 1.97% and 2.50% for individuals with European and African ancestry, respectively. We expect our program can improve genotyping qualities of the large number of ongoing and planned whole genome sequencing projects. Contact: dzhi@ms.soph.uab.edu; kzhang@ms.soph.uab.edu Availability: The software package HapSeq and its manual can be found and downloaded at www.ssg.uab.edu/hapseq/. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22285565

  9. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing the result with theory. The present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
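
    A discrete ARMA model driven by a one-minus-cosine gust input can be sketched as follows. The coefficients here are illustrative placeholders; a real reduced-order model would identify them from FUN3D training responses.

```python
import math

def arma_response(u, a, b):
    """Discrete ARMA model y[k] = sum_i a[i]*y[k-1-i] + sum_j b[j]*u[k-j].

    a are autoregressive coefficients on past outputs, b are moving-average
    coefficients on the input sequence u. Values here are placeholders, not
    coefficients identified from any CFD data.
    """
    y = [0.0] * len(u)
    for k in range(len(u)):
        ar = sum(a[i] * y[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
        ma = sum(b[j] * u[k - j] for j in range(len(b)) if k - j >= 0)
        y[k] = ar + ma
    return y

# One-minus-cosine gust profile of length L samples, zero afterwards.
L, n = 50, 200
gust = [0.5 * (1.0 - math.cos(2.0 * math.pi * k / L)) if k < L else 0.0
        for k in range(n)]
response = arma_response(gust, a=[0.9], b=[0.1])  # first-order illustrative model
```

    Once identified, such a model replaces repeated CFD runs: any gust profile can be pushed through the difference equation at negligible cost.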

  10. Residual Stresses and Critical Initial Flaw Size Analyses of Welds

    NASA Technical Reports Server (NTRS)

    Brust, Frederick W.; Raju, Ivatury S.; Dawocke, David S.; Cheston, Derrick

    2009-01-01

    An independent assessment was conducted to determine the critical initial flaw size (CIFS) for the flange-to-skin weld in the Ares I-X Upper Stage Simulator (USS). A series of weld analyses were performed to determine the residual stresses in a critical region of the USS. Weld residual stresses increase both constraint and mean stress, thereby having an important effect on fatigue life. The purpose of the weld analyses was to model the weld process using a variety of sequences to determine the 'best' sequence in terms of weld residual stresses and distortions. The many factors examined in this study include weld design (single-V, double-V groove), weld sequence, boundary conditions, and material properties, among others. The results of this weld analysis are combined with service loads to perform a fatigue and critical initial flaw size evaluation.

  11. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
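
    As an illustration of the statistical evaluation mentioned above, the first test in NIST SP 800-22, the frequency (monobit) test, can be implemented in a few lines:

```python
import math
import random

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Returns the p-value for the null hypothesis that ones and zeros are
    equally likely; by the suite's convention, p >= 0.01 counts as a pass.
    """
    s = sum(1 if b else -1 for b in bits)        # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(len(bits))        # normalised partial sum
    return math.erfc(s_obs / math.sqrt(2.0))

# Illustrative check with a software PRNG and a grossly biased sequence.
rng = random.Random(0)
p_random = monobit_frequency_test([rng.getrandbits(1) for _ in range(100_000)])
p_biased = monobit_frequency_test([1] * 100_000)  # all ones: should fail badly
```

    The full suite applies many such tests (runs, block frequency, spectral, and so on); this single test only catches gross bias, which is why the paper also reports TestU01's BigCrush battery.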

  12. Mechanisms controlling the complete accretionary beach state sequence

    NASA Astrophysics Data System (ADS)

    Dubarbier, Benjamin; Castelle, Bruno; Ruessink, Gerben; Marieu, Vincent

    2017-06-01

    The accretionary downstate beach sequence is a key element of the observed nearshore morphological variability along sandy coasts. We present and analyze the first numerical simulation of such a sequence using a process-based morphodynamic model that solves the coupling between waves, depth-integrated currents, and sediment transport. The simulation evolves from an alongshore-uniform barred beach (storm profile) to an almost featureless shore-welded terrace (summer profile) through the highly alongshore-variable detached crescentic bar and transverse bar/rip system states. A global analysis of the full sequence allows us to determine the varying contributions of the different hydro-sedimentary processes. Sediment transport driven by orbital velocity skewness is critical to the overall onshore sandbar migration, while gravitational downslope sediment transport acts as a damping term inhibiting further channel growth enforced by rip flow circulation. Accurate morphological diffusivity and inclusion of orbital velocity skewness open new perspectives in terms of morphodynamic modeling of real beaches.

  13. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
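
    For reference, the unconstrained Viterbi recursion that the k-segment algorithms extend can be sketched as follows, with toy two-state parameters chosen purely for illustration:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Standard Viterbi algorithm: the MAP hidden state sequence of an HMM.

    log_start[s], log_trans[p][s] and log_emit[s][o] are log-probabilities;
    working in log space avoids underflow on long sequences.
    """
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            col[s] = V[-1][best_prev] + log_trans[best_prev][s] + log_emit[s][o]
            ptr[s] = best_prev
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):       # follow back-pointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy example: sticky two-state chain emitting binary symbols.
ln = math.log
states = ("lo", "hi")
start = {"lo": ln(0.5), "hi": ln(0.5)}
trans = {"lo": {"lo": ln(0.9), "hi": ln(0.1)},
         "hi": {"lo": ln(0.1), "hi": ln(0.9)}}
emit = {"lo": {0: ln(0.8), 1: ln(0.2)},
        "hi": {0: ln(0.2), 1: ln(0.8)}}
path = viterbi([0, 0, 1, 1, 1, 0], states, start, trans, emit)
```

    The k-segment recursions add one more index to this dynamic program, conditioning the maximisation on the number of state segments used so far.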

  14. Evolutionary distances in the twilight zone--a rational kernel approach.

    PubMed

    Schwarz, Roland F; Fletcher, William; Förster, Frank; Merget, Benjamin; Wolf, Matthias; Schultz, Jörg; Markowetz, Florian

    2010-12-31

    Phylogenetic tree reconstruction is traditionally based on multiple sequence alignments (MSAs) and heavily depends on the validity of this information bottleneck. With increasing sequence divergence, the quality of MSAs decays quickly. Alignment-free methods, on the other hand, are based on abstract string comparisons and avoid potential alignment problems. However, in general they are not biologically motivated and ignore our knowledge about the evolution of sequences. Thus, it is still a major open question how to define an evolutionary distance metric between divergent sequences that makes use of indel information and known substitution models without the need for a multiple alignment. Here we propose a new evolutionary distance metric to close this gap. It uses finite-state transducers to create a biologically motivated similarity score which models substitutions and indels, and does not depend on a multiple sequence alignment. The sequence similarity score is defined in analogy to pairwise alignments and additionally has the positive semi-definite property. We describe its derivation and show in simulation studies and real-world examples that it is more accurate in reconstructing phylogenies than competing methods. The result is a new and accurate way of determining evolutionary distances in and beyond the twilight zone of sequence alignments that is suitable for large datasets.

  15. Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.

    PubMed

    Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw

    2018-07-01

    Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum our results demonstrate that the MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. On the Power and the Systematic Biases of the Detection of Chromosomal Inversions by Paired-End Genome Sequencing

    PubMed Central

    Lucas Lledó, José Ignacio; Cáceres, Mario

    2013-01-01

    One of the most used techniques to study structural variation at a genome level is paired-end mapping (PEM). PEM has the advantage of being able to detect balanced events, such as inversions and translocations. However, inversions are still quite difficult to predict reliably, especially from high-throughput sequencing data. We simulated realistic PEM experiments with different combinations of read and library fragment lengths, including sequencing errors and meaningful base qualities, to quantify and track down the origin of false positives and negatives along sequencing, mapping, and downstream analysis. We show that PEM is very appropriate to detect a wide range of inversions, even with low-coverage data. However, % of inversions located between segmental duplications are expected to go undetected by the most common sequencing strategies. In general, longer DNA libraries improve the detectability of inversions far better than increments of the coverage depth or the read length. Finally, we review the performance of three algorithms to detect inversions (SVDetect, GRIAL, and VariationHunter), identify common pitfalls, and reveal important differences in their breakpoint precisions. These results stress the importance of the sequencing strategy for the detection of structural variants, especially inversions, and offer guidelines for the design of future genome sequencing projects. PMID:23637806

  17. A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data

    PubMed Central

    Feng, Hao; Conneely, Karen N.; Wu, Hao

    2014-01-01

    DNA methylation is an important epigenetic modification that has essential roles in cellular processes including gene regulation, development and disease, and is widely dysregulated in most types of cancer. Recent advances in sequencing technology have enabled the measurement of DNA methylation at single nucleotide resolution through methods such as whole-genome bisulfite sequencing and reduced representation bisulfite sequencing. In DNA methylation studies, a key task is to identify differences under distinct biological contexts, for example, between tumor and normal tissue. A challenge in sequencing studies is that the number of biological replicates is often limited by the costs of sequencing. The small number of replicates leads to unstable variance estimation, which can reduce the accuracy of detecting differentially methylated loci (DML). Here we propose a novel statistical method to detect DML when comparing two treatment groups. The sequencing counts are described by a lognormal-beta-binomial hierarchical model, which provides a basis for information sharing across different CpG sites. A Wald test is developed for hypothesis testing at each CpG site. Simulation results show that the proposed method yields improved DML detection compared to existing methods, particularly when the number of replicates is low. The proposed method is implemented in the Bioconductor package DSS. PMID:24561809
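
    The generative side of such a model can be sketched with a beta-binomial draw and a naive, non-shrunken Wald statistic. The means, dispersion and read depths below are illustrative assumptions; the paper's hierarchical model additionally shares variance information across CpG sites, which this sketch omits.

```python
import math
import random

def beta_binomial(n_reads, mean, phi, rng):
    """Draw a methylated-read count out of n_reads: the per-site methylation
    level p is Beta-distributed with the given mean and dispersion phi, and
    the count is Binomial(n_reads, p)."""
    a = mean * (1.0 / phi - 1.0)
    b = (1.0 - mean) * (1.0 / phi - 1.0)
    p = rng.betavariate(a, b)
    return sum(1 for _ in range(n_reads) if rng.random() < p)

def naive_wald(counts1, n1, counts2, n2):
    """Naive Wald statistic on pooled group methylation proportions:
    difference in proportions over its estimated standard error
    (no information sharing across sites, unlike the paper's method)."""
    p1 = sum(counts1) / sum(n1)
    p2 = sum(counts2) / sum(n2)
    var = p1 * (1 - p1) / sum(n1) + p2 * (1 - p2) / sum(n2)
    return (p1 - p2) / math.sqrt(var)

# Illustrative site: 4 replicates per group, 30 reads each, strong difference.
rng = random.Random(0)
depth = [30] * 4
grp1 = [beta_binomial(30, 0.2, 0.1, rng) for _ in depth]
grp2 = [beta_binomial(30, 0.8, 0.1, rng) for _ in depth]
w = naive_wald(grp1, depth, grp2, depth)  # large |w| suggests a DML
```

    With few replicates, the per-site variance estimate in this naive statistic is unstable, which is precisely the problem the hierarchical shrinkage model addresses.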

  18. Aggregation of peptides in the tube model with correlated sidechain orientations

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Ba; Hoang, Trinh Xuan

    2015-06-01

    The ability of proteins and peptides to aggregate and form toxic amyloid fibrils is associated with a range of diseases including BSE (or mad cow), Alzheimer's and Parkinson's diseases. In this study, we investigate the role of the amino acid sequence in aggregation propensity by using a modified tube model with a new procedure for the hydrophobic interaction. In this model, the amino acid sidechains are not considered explicitly, but their orientations are taken into account in the formation of hydrophobic contacts. Extensive Monte Carlo simulations for systems of short peptides are carried out with the use of the parallel tempering technique. Our results show that the propensity to form aggregates and the structures of the aggregates strongly depend on the amino acid sequence and the number of peptides. Some sequences may not aggregate at all at a presumably physiological temperature, while others can easily form fibril-like β-sheet structures. Our study provides insight into the principles of how the formation of amyloid can be governed by the amino acid sequence.

  19. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models.

    PubMed

    Ding, Jiarui; Condon, Anne; Shah, Sohrab P

    2018-05-21

    Single-cell RNA-sequencing has great potential to discover cell types, identify cell states, trace development lineages, and reconstruct the spatial organization of cells. However, dimension reduction to interpret structure in single-cell sequencing data remains a challenge. Existing algorithms are either not able to uncover the clustering structures in the data or lose global information such as groups of clusters that are close to each other. We present a robust statistical model, scvis, to capture and visualize the low-dimensional structures in single-cell gene expression data. Simulation results demonstrate that low-dimensional representations learned by scvis preserve both the local and global neighbor structures in the data. In addition, scvis is robust to the number of data points and learns a probabilistic parametric mapping function to add new data points to an existing embedding. We then use scvis to analyze four single-cell RNA-sequencing datasets, exemplifying interpretable two-dimensional representations of the high-dimensional single-cell RNA-sequencing data.

  20. Low-Field Nuclear Polarization Using Nitrogen Vacancy Centers in Diamonds

    NASA Astrophysics Data System (ADS)

    Hovav, Y.; Naydenov, B.; Jelezko, F.; Bar-Gill, N.

    2018-02-01

    It was recently demonstrated that bulk nuclear polarization can be obtained using nitrogen vacancy (NV) color centers in diamonds, even at ambient conditions. This is based on the optical polarization of the NV electron spin, combined with one of several polarization transfer methods. One such method is the nuclear orientation via electron spin locking (NOVEL) sequence, where a spin-locking sequence is applied to the NV spin, with a microwave driving amplitude matched to the nuclear precession frequency. This was performed at relatively high fields to allow for both polarization transfer and noise decoupling. As a result, this scheme requires accurate magnetic field alignment in order to preserve the NV properties. Such a requirement may be undesired or impractical in many scenarios. Here we present a new sequence, termed the refocused NOVEL, which can be used for polarization transfer (and detection) even at low fields. Numerical simulations are performed, taking into account both the spin Hamiltonian and spin decoherence, and we show that, under realistic parameters, it can outperform the NOVEL sequence.

  1. Investigation of the design of a metal-lined fully wrapped composite vessel under high internal pressure

    NASA Astrophysics Data System (ADS)

    Kalaycıoğlu, Barış; Husnu Dirikolu, M.

    2010-09-01

    In this study, a Type III composite pressure vessel (ISO 11439:2000) loaded with high internal pressure is investigated in terms of the effect of the orientation of the element coordinate system while simulating the continuous variation of the fibre angle, the effect of symmetric and non-symmetric composite wall stacking sequences, and lastly, a stacking sequence evaluation for reducing the cylindrical section-end cap transition region stress concentration. The research was performed using an Ansys® model with 2.9 l volume, 6061 T6 aluminium liner/Kevlar® 49-Epoxy vessel material, and a service internal pressure loading of 22 MPa. The results show that symmetric stacking sequences give higher burst pressures by up to 15%. Stacking sequence evaluations provided a further 7% pressure-carrying capacity as well as reduced stress concentration in the transition region. Finally, the Type III vessel under consideration provides a 45% lighter construction as compared with an all metal (Type I) vessel.

  2. Two-dimensional honeycomb network through sequence-controlled self-assembly of oligopeptides.

    PubMed

    Abb, Sabine; Harnau, Ludger; Gutzler, Rico; Rauschenbach, Stephan; Kern, Klaus

    2016-01-12

    The sequence of a peptide programs its self-assembly and hence the expression of specific properties through non-covalent interactions. A large variety of peptide nanostructures has been designed employing different aspects of these non-covalent interactions, such as dispersive interactions, hydrogen bonding or ionic interactions. Here we demonstrate the sequence-controlled fabrication of molecular nanostructures using peptides as bio-organic building blocks for two-dimensional (2D) self-assembly. Scanning tunnelling microscopy reveals changes from compact or linear assemblies (angiotensin I) to long-range ordered, chiral honeycomb networks (angiotensin II) as a result of removal of steric hindrance by sequence modification. Guided by our observations, molecular dynamic simulations yield atomistic models for the elucidation of interpeptide-binding motifs. This new approach to 2D self-assembly on surfaces grants insight at the atomic level that will enable the use of oligo- and polypeptides as large, multi-functional bio-organic building blocks, and opens a new route towards rationally designed, bio-inspired surfaces.

  3. MToolBox: a highly automated pipeline for heteroplasmy annotation and prioritization analysis of human mitochondrial variants in high-throughput sequencing

    PubMed Central

    Diroma, Maria Angela; Santorsola, Mariangela; Guttà, Cristiano; Gasparre, Giuseppe; Picardi, Ernesto; Pesole, Graziano; Attimonelli, Marcella

    2014-01-01

    Motivation: The increasing availability of mitochondria-targeted and off-target sequencing data in whole-exome and whole-genome sequencing studies (WXS and WGS) has raised the demand for effective pipelines to accurately measure heteroplasmy and to easily recognize the most functionally important mitochondrial variants among a huge number of candidates. To this purpose, we developed MToolBox, a highly automated pipeline to reconstruct and analyze human mitochondrial DNA from high-throughput sequencing data. Results: MToolBox implements an effective computational strategy for mitochondrial genome assembly and haplogroup assignment, also including a prioritization analysis of detected variants. MToolBox provides a Variant Call Format file featuring, for the first time, allele-specific heteroplasmy and annotation files with prioritized variants. MToolBox was tested on simulated samples and applied to 1000 Genomes WXS datasets. Availability and implementation: MToolBox package is available at https://sourceforge.net/projects/mtoolbox/. Contact: marcella.attimonelli@uniba.it Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028726
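
    Allele-specific heteroplasmy is, at its core, the fraction of reads supporting each allele at a mitochondrial position. A minimal sketch of that calculation (not MToolBox's actual implementation, which also performs assembly, quality filtering, and annotation):

```python
def heteroplasmy_fraction(allele_counts):
    """Per-allele heteroplasmy fractions at one mtDNA position, given
    read counts per observed allele, e.g. {'A': 180, 'G': 20}."""
    total = sum(allele_counts.values())
    if total == 0:
        return {}
    return {allele: n / total for allele, n in allele_counts.items()}

fractions = heteroplasmy_fraction({'A': 180, 'G': 20})
# The minor allele 'G' is present at 10% heteroplasmy.
```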

  4. An Investigation of Ionic Flows in a Sphere-Plate Electrode Gap

    NASA Astrophysics Data System (ADS)

    Z. Alisoy, H.; Alagoz, S.; T. Alisoy, G.; B. Alagoz, B.

    2013-10-01

    This paper presents analyses of ion flow characteristics and ion discharge pulses in a sphere-ground plate electrode system. As a result of variation in electric field intensity in the electrode gap, the ion flows towards electrodes generate non-uniform discharging pulses. Inspection of these pulses provides useful information on ionic stream kinetics, the effective thickness of ion cover around electrodes, and the timing of ion clouds discharge pulse sequences. A finite difference time domain (FDTD) based space-charge motion simulation is used for the numerical analysis of the spatio-temporal development of ionic flows following the first Townsend avalanche, and the simulation results demonstrate expansion of the positive ion flow and compression of the negative ion flow, which results in non-uniform discharge pulse characteristics.

  5. Numeric stratigraphic modeling: Testing sequence stratigraphic concepts using high resolution geologic examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.

    1996-08-01

    Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight into potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.

  6. Protein Folding and Structure Prediction from the Ground Up: The Atomistic Associative Memory, Water Mediated, Structure and Energy Model.

    PubMed

    Chen, Mingchen; Lin, Xingcheng; Zheng, Weihua; Onuchic, José N; Wolynes, Peter G

    2016-08-25

    The associative memory, water mediated, structure and energy model (AWSEM) is a coarse-grained force field with transferable tertiary interactions that incorporates local in sequence energetic biases using bioinformatically derived structural information about peptide fragments with locally similar sequences that we call memories. The memory information from the Protein Data Bank (PDB) guides proper protein folding. The structural information about available sequences in the database varies in quality and can sometimes lead to locally frustrated free energy landscapes. One way out of this difficulty is to construct the input fragment memory information from all-atom simulations of portions of the complete polypeptide chain. In this paper, we investigate this approach, first put forward by Kwac and Wolynes, in a more complete way by studying the structure prediction capabilities of this approach for six α-helical proteins. This scheme, which we call the atomistic associative memory, water mediated, structure and energy model (AAWSEM), amounts to an ab initio protein structure prediction method that starts from the ground up without using bioinformatic input. The free energy profiles from AAWSEM show that atomistic fragment memories are sufficient to guide the correct folding when tertiary forces are included. AAWSEM combines the efficiency of coarse-grained simulations on the full protein level with the local structural accuracy achievable from all-atom simulations of only parts of a large protein. The results suggest that a hybrid use of atomistic fragment memory and database memory in structural predictions may well be optimal for many practical applications.

  7. Molecular dynamics study of some non-hydrogen-bonding base pair DNA strands

    NASA Astrophysics Data System (ADS)

    Tiwari, Rakesh K.; Ojha, Rajendra P.; Tiwari, Gargi; Pandey, Vishnudatt; Mall, Vijaysree

    2018-05-01

    In order to elucidate the structural activity of hydrophobically modified DNA, the DMMO2-D5SICS base pair is introduced as a constituent in different sets of 12-mer and 14-mer DNA sequences for molecular dynamics (MD) simulation in explicit water solvent. The AMBER 14 force field was employed for each duplex during 200 ns production-dynamics simulations in an orthogonal water box, using the particle-mesh Ewald (PME) method under periodic boundary conditions (PBC), to determine conformational parameters of the complex. The force-field parameters of the modified base pair were calculated with Gaussian using Hartree-Fock ab initio methodology. RMSD results reveal that the conformation of the duplex is sequence dependent and that the binding energy of the complex depends on the position of the modified base pair in the nucleic acid strand. We found that non-bonded energy contributed significantly more than electrostatic energy to stabilising such duplexes. The distortion produced within the strands by this base pair was local and destabilised duplex integrity near the substitution; moreover, the binding energy of the duplex depends on the position of the hydrophobic base-pair substitution and on the DNA sequence, in strong agreement with the corresponding experimental study.
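
    The RMSD measure used above to compare duplex conformations over the trajectory can be sketched as follows; real MD analyses first superpose each frame onto the reference with a Kabsch fit, which this toy version omits:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations given as
    lists of (x, y, z) atom positions, assumed pre-aligned (no Kabsch fit)."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Two-atom toy conformations (coordinates in Angstrom, hypothetical):
ref   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
frame = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]  # one atom displaced by 1 A
d = rmsd(ref, frame)
```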

  8. CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.

    PubMed

    Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T

    2017-12-28

    In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and then the resulting normalized gene counts are used as input for parametric or non-parametric differential gene expression tests. A distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information determined from the concentration of an RNA sample to estimate the posterior distribution of a true gene count. Our method performs better than or comparably to NOISeq and GFOLD, according to the results from simulations and experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq), and is available at https://github.com/joel-lzb/CORNAS.
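
    The paper's Bayesian model is not reproduced here, but the idea of inverting an observed count into a posterior over true counts can be sketched with a toy binomial model, in which a hypothetical per-transcript capture probability stands in for sequencing coverage and the prior over true counts is uniform:

```python
from math import comb

def posterior_true_count(observed, capture_p, max_true=None):
    """Posterior P(true count = k | observed), assuming each of the k true
    transcripts is sequenced independently with probability capture_p
    (a stand-in for coverage) and a uniform prior on k over a truncated
    grid. A toy model, not the CORNAS method itself."""
    if max_true is None:
        max_true = max(10, int(observed / capture_p * 4))
    likes = {}
    for k in range(observed, max_true + 1):
        likes[k] = comb(k, observed) * capture_p ** observed \
                   * (1 - capture_p) ** (k - observed)
    z = sum(likes.values())
    return {k: v / z for k, v in likes.items()}

post = posterior_true_count(observed=5, capture_p=0.4)
map_k = max(post, key=post.get)  # most probable true count
```

    Low coverage widens this posterior, which is the information the abstract argues should feed into the differential expression call.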

  9. Combinational Circuit Obfuscation Through Power Signature Manipulation

    DTIC Science & Technology

    2011-06-01

    Front matter and appendix listings (extraction residue): power signature estimation results, produced by SPICE simulation, for circuit variants c264, c5355 and c499, including power signatures for the c264 circuit obtained per obfuscation algorithm, by random sequence, and with smart SSR selection of rear-level components and gates over 1000 iterations.

  10. Evaluation of microRNA alignment techniques

    PubMed Central

    Kaspi, Antony; El-Osta, Assam

    2016-01-01

    Genomic alignment of small RNA (smRNA) sequences such as microRNAs poses considerable challenges due to their short length (∼21 nucleotides [nt]) as well as the large size and complexity of plant and animal genomes. While several tools have been developed for high-throughput mapping of longer mRNA-seq reads (>30 nt), there are few that are specifically designed for mapping of smRNA reads including microRNAs. The accuracy of these mappers has not been systematically determined in the case of smRNA-seq. In addition, it is unknown whether these aligners accurately map smRNA reads containing sequence errors and polymorphisms. By using simulated read sets, we determine the alignment sensitivity and accuracy of 16 short-read mappers and quantify their robustness to mismatches, indels, and nontemplated nucleotide additions. These were explored in the context of a plant genome (Oryza sativa, ∼500 Mbp) and a mammalian genome (Homo sapiens, ∼3.1 Gbp). Analysis of simulated and real smRNA-seq data demonstrates that mapper selection impacts differential expression results and interpretation. These results will inform on best practice for smRNA mapping and enable more accurate smRNA detection and quantification of expression and RNA editing. PMID:27284164
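
    Scoring a mapper against simulated reads reduces to checking each reported placement against the known origin of the read. A minimal sketch with hypothetical read IDs and positions (real benchmarks also handle multi-mapping and strand):

```python
def score_mapper(truth, reported, tol=0):
    """Sensitivity and precision of read mapping against simulated truth.
    truth: {read_id: (chrom, pos)} for all simulated reads.
    reported: {read_id: (chrom, pos)} for reads the mapper aligned.
    A placement is correct if the chromosome matches and the position
    is within tol of the true origin."""
    correct = sum(1 for rid, (c, p) in reported.items()
                  if rid in truth and truth[rid][0] == c
                  and abs(truth[rid][1] - p) <= tol)
    sensitivity = correct / len(truth)                        # placed correctly / simulated
    precision = correct / len(reported) if reported else 0.0  # placed correctly / aligned
    return sensitivity, precision

truth = {'r1': ('chr1', 100), 'r2': ('chr1', 500), 'r3': ('chr2', 42)}
reported = {'r1': ('chr1', 100), 'r2': ('chr2', 999)}  # r2 misplaced, r3 unmapped
sens, prec = score_mapper(truth, reported)
```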

  11. Hydrodynamic Radii of Intrinsically Disordered Proteins Determined from Experimental Polyproline II Propensities

    PubMed Central

    Tomasso, Maria E.; Tarver, Micheal J.; Devarajan, Deepa; Whitten, Steven T.

    2016-01-01

    The properties of disordered proteins are thought to depend on intrinsic conformational propensities for polyproline II (PPII) structure. While intrinsic PPII propensities have been measured for the common biological amino acids in short peptides, the ability of these experimentally determined propensities to quantitatively reproduce structural behavior in intrinsically disordered proteins (IDPs) has not been established. Presented here are results from molecular simulations of disordered proteins showing that the hydrodynamic radius (Rh) can be predicted from experimental PPII propensities with good agreement, even when charge-based considerations are omitted. The simulations demonstrate that Rh and chain propensity for PPII structure are linked via a simple power-law scaling relationship, which was tested using the experimental Rh of 22 IDPs covering a wide range of peptide lengths, net charge, and sequence composition. Charge effects on Rh were found to be generally weak when compared to PPII effects on Rh. Results from this study indicate that the hydrodynamic dimensions of IDPs are evidence of considerable sequence-dependent backbone propensities for PPII structure that qualitatively, if not quantitatively, match conformational propensities measured in peptides. PMID:26727467
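
    The power-law link between hydrodynamic radius and chain length can be illustrated with a toy scaling function. The prefactor and exponent limits below are illustrative placeholders, not the fitted values from the paper; the only idea carried over is that the scaling exponent grows with the chain's PPII propensity:

```python
def hydrodynamic_radius(n_res, f_ppii, r0=2.16, nu_lo=0.33, nu_hi=0.60):
    """Toy power-law estimate Rh = r0 * N**nu (Angstrom), with the scaling
    exponent nu interpolated linearly between a compact-globule limit
    (~0.33) and an expanded, PPII-rich limit (~0.60) according to the
    fractional PPII propensity f_ppii in [0, 1]. Coefficients are
    hypothetical, for illustration only."""
    nu = nu_lo + (nu_hi - nu_lo) * f_ppii
    return r0 * n_res ** nu

rh_low  = hydrodynamic_radius(100, f_ppii=0.2)
rh_high = hydrodynamic_radius(100, f_ppii=0.8)
# Higher PPII propensity expands the chain: rh_high > rh_low.
```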

  12. Designing Better Scaffolding in Teaching Complex Systems with Graphical Simulations

    NASA Astrophysics Data System (ADS)

    Li, Na

    Complex systems are an important topic in science education today, but they are usually difficult for secondary-level students to learn. Although graphic simulations have many advantages in teaching complex systems, scaffolding is a critical factor for effective learning. This dissertation study was conducted around two complementary research questions on scaffolding: (1) How can we chunk and sequence learning activities in teaching complex systems? (2) How can we help students make connections among system levels across learning activities (level bridging)? With a sample of 123 seventh-graders, this study employed a 3x2 experimental design that factored sequencing methods (independent variable 1; three levels) with level-bridging scaffolding (independent variable 2; two levels) and compared the effectiveness of each combination. The study measured two dependent variables: (1) knowledge integration (i.e., integrating and connecting content-specific normative concepts and providing coherent scientific explanations); (2) understanding of the deep causal structure (i.e., being able to grasp and transfer the causal knowledge of a complex system). The study used a computer-based simulation environment as the research platform to teach the ideal gas law as a system. The ideal gas law is an emergent chemical system that has three levels: (1) experiential macro level (EM) (e.g., an aerosol can explodes when it is thrown into the fire); (2) abstract macro level (AM) (i.e., the relationships among temperature, pressure and volume); (3) micro level (Mi) (i.e., molecular activity). The sequencing methods of these levels were manipulated by changing the order in which they were delivered with three possibilities: (1) EM-AM-Mi; (2) Mi-AM-EM; (3) AM-Mi-EM. The level-bridging scaffolding variable was manipulated on two aspects: (1) inserting inter-level questions among learning activities; (2) two simulations dynamically linked in the final learning activity. 
Addressing the first research question, the Experiential macro-Abstract macro-Micro (EM-AM-Mi) sequencing method, following the "concrete to abstract" principle, produced better knowledge integration while the Micro-Abstract macro-Experiential macro (Mi-AM-EM) sequencing method, congruent with the causal direction of the emergent system, produced better understanding of the deep causal structure only when level-bridging scaffolding was provided. The Abstract macro-Micro-Experiential macro (AM-Mi-EM) sequencing method produced worse performance in general, because it did not follow the "concrete to abstract" principle, nor did it align with the causal structure of the emergent system. As to the second research question, the results showed that level-bridging scaffolding was important for both knowledge integration and understanding of the causal structure in learning the ideal gas law system.

  13. PathoScope 2.0: a complete computational framework for strain identification in environmental or clinical sequencing samples

    PubMed Central

    2014-01-01

    Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many tools for analyzing metagenomic sequencing data have emerged recently; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps, including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611

  14. RSRM top hat cover simulator lightning test, volume 2. Appendix A: Resistance measurements. Appendix B: Lightning test data plots

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Resistance measurements are given in graphical form for the case in which a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was to measure the electric and magnetic fields induced inside a redesigned solid rocket motor case.

  15. [Computer simulation of a clinical magnet resonance tomography scanner for training purposes].

    PubMed

    Hackländer, T; Mertens, H; Cramer, B M

    2004-08-01

    The idea for this project was born of the necessity to offer medical students an easy approach to the theoretical basics of magnetic resonance imaging. The aim was to simulate the features and functions of such a scanner on a commercially available computer by means of a computer program. The simulation was programmed in pure Java under the GNU General Public License and is freely available for a commercially available computer with Windows, Macintosh or Linux operating system. The graphical user interface is modeled on a real scanner. In an external program, parameter images for the proton density and the relaxation times T1 and T2 are calculated on the basis of clinical examinations. From these, the image calculation is carried out in the simulation program pixel by pixel on the basis of a pulse sequence chosen and modified by the user. The images can be stored and printed. In addition, it is possible to display and modify k-space images. Seven classes of pulse sequences are implemented and up to 14 relevant sequence parameters, such as repetition time and echo time, can be altered. Aliasing and motion artifacts can be simulated. As the image calculation only takes a few seconds, interactive working is possible. The simulation has been used in university education for more than 1 year, successfully illustrating the dependence of the MR images on the measuring parameters. This should facilitate students' understanding of MR imaging in the future.
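
    The pixel-by-pixel calculation such a simulator performs can be illustrated with the standard ideal spin-echo signal equation, which combines the proton density and relaxation maps with the user's TR and TE choices; the tissue values below are illustrative round numbers, not taken from the paper:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Ideal spin-echo magnitude signal for one pixel:
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2),
    with TR/TE and the tissue T1/T2 in the same time units (ms here)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# A white-matter-like pixel (T1 ~ 600 ms, T2 ~ 80 ms, illustrative):
t1w = spin_echo_signal(pd=0.7, t1=600, t2=80, tr=500, te=15)    # T1-weighted
t2w = spin_echo_signal(pd=0.7, t1=600, t2=80, tr=3000, te=100)  # T2-weighted
```

    Changing TR and TE shifts the contrast between tissues, which is exactly the dependence on measuring parameters the simulator is meant to demonstrate.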

  16. Deciphering the molecular mechanisms underlying the binding of the TWIST1/E12 complex to regulatory E-box sequences

    PubMed Central

    Bouard, Charlotte; Terreux, Raphael; Honorat, Mylène; Manship, Brigitte; Ansieau, Stéphane; Vigneron, Arnaud M.; Puisieux, Alain; Payen, Léa

    2016-01-01

    Abstract The TWIST1 bHLH transcription factor controls embryonic development and cancer processes. Although molecular and genetic analyses have provided a wealth of data on the role of bHLH transcription factors, very little is known on the molecular mechanisms underlying their binding affinity to the E-box sequence of the promoter. Here, we used an in silico model of the TWIST1/E12 (TE) heterocomplex and performed molecular dynamics (MD) simulations of its binding to specific (TE-box) and modified E-box sequences. We focused on (i) active E-box and inactive E-box sequences, on (ii) modified active E-box sequences, as well as on (iii) two box sequences with modified adjacent bases, the AT- and TA-boxes. Our in silico models were supported by functional in vitro binding assays. This exploration highlighted the predominant role of protein side-chain residues, close to the heart of the complex, in anchoring the dimer to DNA sequences, and unveiled a shift towards adjacent ((-1) and (-1*)) bases and conserved bases of modified E-box sequences. In conclusion, our study provides proof of the predictive value of these MD simulations, which may contribute to the characterization of specific inhibitors by docking approaches, and their use in pharmacological therapies by blocking the tumoral TWIST1/E12 function in cancers. PMID:27151200

  17. In silico evolution of the Drosophila gap gene regulatory sequence under elevated mutational pressure.

    PubMed

    Chertkova, Aleksandra A; Schiffman, Joshua S; Nuzhdin, Sergey V; Kozlov, Konstantin N; Samsonova, Maria G; Gursky, Vitaly V

    2017-02-07

    Cis-regulatory sequences are often composed of many low-affinity transcription factor binding sites (TFBSs). Determining the evolutionary and functional importance of regulatory sequence composition is impeded without a detailed knowledge of the genotype-phenotype map. We simulate the evolution of regulatory sequences involved in Drosophila melanogaster embryo segmentation during early development. Natural selection evaluates gene expression dynamics produced by a computational model of the developmental network. We observe a dramatic decrease in the total number of transcription factor binding sites through the course of evolution. Despite a decrease in average sequence binding energies through time, the regulatory sequences tend towards organisations containing increased high affinity transcription factor binding sites. Additionally, the binding energies of separate sequence segments demonstrate ubiquitous mutual correlations through time. Fewer than 10% of the initial TFBSs, deemed 'core' sites, are maintained throughout the entire simulation. These sites have increased functional importance as assessed under wild-type conditions and their binding energy distributions are highly conserved. Furthermore, TFBSs within close proximity of core sites exhibit increased longevity, reflecting functional regulatory interactions with core sites. In response to elevated mutational pressure, evolution tends to sample regulatory sequence organisations with fewer, albeit on average stronger, functional transcription factor binding sites. These organisations are also shaped by the regulatory interactions among core binding sites with sites in their local vicinity.
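
    The mutation-selection dynamic described above can be caricatured with a toy loop that accepts point mutations only when a simplified binding score does not decrease. The motif and parameters are hypothetical and the scoring is a stand-in; this is not the paper's gap-gene expression model:

```python
import random

def binding_strength(seq, motif="TTAG"):
    """Toy 'binding' score: number of (possibly overlapping) exact motif
    hits. A stand-in for thermodynamic TFBS scoring in the actual model."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

def evolve(seq, generations=200, mu=0.02, seed=1):
    """Accept a round of point mutations only if it does not reduce the
    binding score (strong purifying selection); a caricature of selection
    acting on expression output."""
    rng = random.Random(seed)
    seq = list(seq)
    for _ in range(generations):
        trial = seq[:]
        for i in range(len(trial)):
            if rng.random() < mu:
                trial[i] = rng.choice("ACGT")
        if binding_strength("".join(trial)) >= binding_strength("".join(seq)):
            seq = trial
    return "".join(seq)

start = "ACGTACGTACGTTTAGACGT"   # hypothetical sequence with one motif hit
final = evolve(start)
```

    Under this acceptance rule the binding score can only stay equal or rise, a toy analogue of the conserved 'core' sites the study reports.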

  18. Recent Productivity Improvements to the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Popernack, Thomas G., Jr.; Sydnor, George H.

    1998-01-01

    Productivity gains have recently been made at the National Transonic Facility wind tunnel at NASA Langley Research Center. A team was assigned to assess and set productivity goals to achieve the desired operating cost and output of the facility. Simulations have been developed to show the sensitivity of selected process productivity improvements in critical areas to reduce overall test cycle times. The improvements consist of an expanded liquid nitrogen storage system, a new fan drive, a new tunnel vent stack heater, replacement of programmable logic controllers, an increased data communications speed, automated test sequencing, and a faster model changeout system. Where possible, quantifiable results of these improvements are presented. Results show that in most cases, improvements meet the productivity gains predicted by the simulations.

  19. Issues on machine learning for prediction of classes among molecular sequences of plants and animals

    NASA Astrophysics Data System (ADS)

    Stehlik, Milan; Pant, Bhasker; Pant, Kumud; Pardasani, K. R.

    2012-09-01

    Nowadays, major laboratories of the world are turning towards in-silico experimentation because of its ease, reproducibility and accuracy. The ethical issues concerning wet-lab experimentation are also minimal in in-silico experimentation. But before we turn fully towards dry-lab simulations, it is necessary to understand the discrepancies and bottlenecks involved in dry-lab experimentation. Before reporting any result based on dry-lab simulations, it is necessary to perform an in-depth statistical analysis of the data. With this in mind, we present a collaborative effort to correlate the findings and results of various machine learning algorithms and to check underlying regressions and mutual dependencies, so as to develop an optimal classifier and predictors.

  20. Biophysical and structural considerations for protein sequence evolution

    PubMed Central

    2011-01-01

    Background Protein sequence evolution is constrained by the biophysics of folding and function, causing interdependence between interacting sites in the sequence. However, current site-independent models of sequence evolution do not take this into account. Recent attempts to integrate the influence of structure and biophysics into phylogenetic models via statistical/informational approaches have not resulted in expected improvements in model performance. This suggests that further innovations are needed for progress in this field. Results Here we develop a coarse-grained physics-based model of protein folding and binding function, and compare it to a popular informational model. We find that both models violate the assumption of the native sequence being close to a thermodynamic optimum, causing directional selection away from the native state. Sampling and simulation show that the physics-based model is more specific for fold-defining interactions that vary less among residue types. The informational model diffuses further in sequence space with fewer barriers and tends to provide less support for an invariant sites model, although amino acid substitutions are generally conservative. Both approaches produce sequences with natural features like dN/dS < 1 and gamma-distributed rates across sites. Conclusions Simple coarse-grained models of protein folding can describe some natural features of evolving proteins but are currently not accurate enough to use in evolutionary inference. This is partly due to improper packing of the hydrophobic core. We suggest possible improvements on the representation of structure, folding energy, and binding function, as regards both native and non-native conformations, and describe a large number of possible applications for such a model. PMID:22171550

  1. 40 CFR 86.1773-99 - Test sequence; general requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test simulation procedures, AC1 and AC2, for the 2001 to 2003 model years only. If a manufacturer desires to conduct an alternative SC03 test simulation other than AC1 and AC2, or the AC1 and AC2 simulations for the 2004 and subsequent model years, the simulation test procedure must be approved in advance...

  2. 40 CFR 86.1773-99 - Test sequence; general requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... test simulation procedures, AC1 and AC2, for the 2001 to 2003 model years only. If a manufacturer desires to conduct an alternative SC03 test simulation other than AC1 and AC2, or the AC1 and AC2 simulations for the 2004 and subsequent model years, the simulation test procedure must be approved in advance...

  3. 40 CFR 86.1773-99 - Test sequence; general requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... test simulation procedures, AC1 and AC2, for the 2001 to 2003 model years only. If a manufacturer desires to conduct an alternative SC03 test simulation other than AC1 and AC2, or the AC1 and AC2 simulations for the 2004 and subsequent model years, the simulation test procedure must be approved in advance...

  4. Impact of Sampling Schemes on Demographic Inference: An Empirical Study in Two Species with Different Mating Systems and Demographic Histories

    PubMed Central

    St. Onge, K. R.; Palmé, A. E.; Wright, S. I.; Lascoux, M.

    2012-01-01

    Most species have at least some level of genetic structure. Recent simulation studies have shown that it is important to consider population structure when sampling individuals to infer past population history. The relevance of the results of these computer simulations for empirical studies, however, remains unclear. In the present study, we use DNA sequence datasets collected from two closely related species with very different histories, the selfing species Capsella rubella and its outcrossing relative C. grandiflora, to assess the impact of different sampling strategies on summary statistics and the inference of historical demography. Sampling strategy did not strongly influence the mean values of Tajima’s D in either species, but it had some impact on the variance. The general conclusions about demographic history were comparable across sampling schemes even when resampled data were analyzed with approximate Bayesian computation (ABC). We used simulations to explore the effects of sampling scheme under different demographic models. We conclude that when sequences from modest numbers of loci (<60) are analyzed, the sampling strategy is generally of limited importance. The same is true under intermediate or high levels of gene flow (4Nm > 2–10) in models in which global expansion is combined with either local expansion or hierarchical population structure. Although we observe a less severe effect of sampling than predicted under some earlier simulation models, our results should not be seen as an encouragement to neglect this issue. In general, a good coverage of the natural range, both within and between populations, will be needed to obtain a reliable reconstruction of a species’s demographic history, and in fact, the effect of sampling scheme on polymorphism patterns may itself provide important information about demographic history. PMID:22870403
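
    Tajima's D, the summary statistic examined above, can be computed directly from the sample size, the number of segregating sites, and the mean pairwise diversity using the standard Tajima (1989) formulation:

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D from n sampled sequences, S segregating sites, and
    mean pairwise nucleotide diversity pi (standard 1989 formulation)."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# When pi equals its neutral expectation S/a1, D is exactly zero:
a1 = sum(1.0 / i for i in range(1, 10))
d_zero = tajimas_d(n=10, S=16, pi=16 / a1)
d_neg = tajimas_d(n=10, S=16, pi=2.0)  # deficit of pairwise diversity
```

    Negative values signal an excess of rare variants (e.g. after expansion), positive values an excess of intermediate-frequency variants, which is why the statistic's sensitivity to sampling scheme matters for demographic inference.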

  5. The use of multiobjective calibration and regional sensitivity analysis in simulating hyporheic exchange

    USGS Publications Warehouse

    Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan

    2012-01-01

    We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.
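
    The uniform random sampling calibration strategy can be sketched as a random search that scores each parameter draw by a combined temperature and pressure RMSE. The forward model below is a smooth stand-in toy function of two parameters, not the actual 2-D heat and flow code, and the parameter ranges are hypothetical:

```python
import math
import random

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def forward_model(k_v, therm_cond, x):
    """Stand-in for the 2-D flow/heat code: a smooth function of a
    'vertical hydraulic conductivity' and a 'thermal conductivity'."""
    return k_v * x + therm_cond * math.sin(x)

def calibrate(obs_temp, obs_pres, xs, n_samples=2000, seed=0):
    """Uniform random search over the two parameters, scoring each draw
    by the sum of the temperature and pressure RMSEs."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        k_v = rng.uniform(0.0, 2.0)
        tc = rng.uniform(0.0, 2.0)
        sim_t = [forward_model(k_v, tc, x) for x in xs]
        sim_p = [forward_model(k_v, tc, x) * 0.5 for x in xs]  # toy coupling
        score = rmse(obs_temp, sim_t) + rmse(obs_pres, sim_p)
        if best is None or score < best[0]:
            best = (score, k_v, tc)
    return best

# Synthetic 'observations' generated at known parameters (1.2, 0.4):
xs = [0.5 * i for i in range(10)]
true_t = [forward_model(1.2, 0.4, x) for x in xs]
true_p = [v * 0.5 for v in true_t]
score, k_v, tc = calibrate(true_t, true_p, xs)
```

    Using both observation types in one objective is what narrows the solution space; scoring on either alone can leave a parameter unidentifiable, as the study found for vertical hydraulic conductivity with pressure-only data.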

  6. Why continuous simulation? The role of antecedent moisture in design flood estimation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Westra, S.; Sharma, A.

    2012-06-01

    Continuous simulation for design flood estimation is increasingly becoming a viable alternative to traditional event-based methods. The advantage of continuous simulation approaches is that the catchment moisture state prior to the flood-producing rainfall event is implicitly incorporated within the modeling framework, provided the model has been calibrated and validated to produce reasonable simulations. This contrasts with event-based models in which both information about the expected sequence of rainfall and evaporation preceding the flood-producing rainfall event, as well as catchment storage and infiltration properties, are commonly pooled together into a single set of "loss" parameters which require adjustment through the process of calibration. To identify the importance of accounting for antecedent moisture in flood modeling, this paper uses a continuous rainfall-runoff model calibrated to 45 catchments in the Murray-Darling Basin in Australia. Flood peaks derived using the historical daily rainfall record are compared with those derived using resampled daily rainfall, for which the sequencing of wet and dry days preceding the heavy rainfall event is removed. The analysis shows that there is a consistent underestimation of the design flood events when antecedent moisture is not properly simulated, which can be as much as 30% when only 1 or 2 days of antecedent rainfall are considered, compared to 5% when this is extended to 60 days of prior rainfall. These results show that, in general, it is necessary to consider both short-term memory in rainfall associated with synoptic scale dependence, as well as longer-term memory at seasonal or longer time scale variability in order to obtain accurate design flood estimates.
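    The resampling experiment described above, which removes the sequencing of wet and dry days preceding the peak event, can be sketched roughly as follows. The rainfall series, window length, and shuffling scheme are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy daily rainfall record (mm): ~30% wet days with gamma-distributed depths.
    rain = np.where(rng.random(365) < 0.3, rng.gamma(2.0, 5.0, 365), 0.0)
    peak_day = int(np.argmax(rain))          # the flood-producing event

    def resample_antecedent(series, event_day, n_days, rng):
        """Shuffle the n_days preceding the event so the observed wet/dry
        sequencing (the antecedent moisture signal) is destroyed while the
        event itself and the rainfall amounts are preserved."""
        out = series.copy()
        lo = max(0, event_day - n_days)
        window = out[lo:event_day].copy()
        rng.shuffle(window)
        out[lo:event_day] = window
        return out

    shuffled = resample_antecedent(rain, peak_day, 60, rng)
    print(rain[peak_day], shuffled[peak_day])   # peak event unchanged
    print(rain.sum(), shuffled.sum())           # totals unchanged
    ```

    Feeding both series through a calibrated rainfall-runoff model and comparing the simulated flood peaks isolates the contribution of antecedent sequencing, which is the paper's experiment.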

  7. A Coalescent-Based Estimator of Admixture From DNA Sequences

    PubMed Central

    Wang, Jinliang

    2006-01-01

    A variety of estimators have been developed to use genetic marker information in inferring the admixture proportions (parental contributions) of a hybrid population. The majority of these estimators used allele frequency data, ignored molecular information that is available in markers such as microsatellites and DNA sequences, and assumed that mutations are absent since the admixture event. As a result, these estimators may fail to deliver an estimate or give rather poor estimates when admixture is ancient and thus mutations are not negligible. A previous molecular estimator based its inference of admixture proportions on the average coalescent times between pairs of genes taken from within and between populations. In this article I propose an estimator that considers the entire genealogy of all of the sampled genes and infers admixture proportions from the numbers of segregating sites in DNA sequence samples. By considering the genealogy of all sequences rather than pairs of sequences, this new estimator also allows the joint estimation of other interesting parameters in the admixture model, such as admixture time, divergence time, population size, and mutation rate. Comparative analyses of simulated data indicate that the new coalescent estimator generally yields better estimates of admixture proportions than the previous molecular estimator, especially when the parental populations are not highly differentiated. It also gives reasonably accurate estimates of other admixture parameters. A human mtDNA sequence data set was analyzed to demonstrate the method, and the analysis results are discussed and compared with those from previous studies. PMID:16624918

  8. Identification of genomic indels and structural variations using split reads

    PubMed Central

    2011-01-01

    Background Recent studies have demonstrated the genetic significance of insertions, deletions, and other more complex structural variants (SVs) in the human population. With the development of next-generation sequencing technologies, high-throughput surveys of SVs on the whole-genome level have become possible. Here we present split-read identification, calibrated (SRiC), a sequence-based method for SV detection. Results We start by mapping each read to the reference genome in standard fashion using gapped alignment. Then to identify SVs, we score each of the many initial mappings with an assessment strategy designed to take into account both sequencing and alignment errors (e.g. scoring events gapped in the center of a read more highly). All current SV calling methods have multilevel biases in their identifications due to both experimental and computational limitations (e.g. calling more deletions than insertions). A key aspect of our approach is that we calibrate all our calls against synthetic data sets generated from simulations of high-throughput sequencing (with realistic error models). This allows us to calculate sensitivity and the positive predictive value under different parameter-value scenarios and for different classes of events (e.g. long deletions vs. short insertions). We run our calculations on representative data from the 1000 Genomes Project. Coupling the observed numbers of events on chromosome 1 with the calibrations gleaned from the simulations (for different length events) allows us to construct a relatively unbiased estimate for the total number of SVs in the human genome across a wide range of length scales. We estimate in particular that an individual genome contains ~670,000 indels/SVs. 
Conclusions Compared with the existing read-depth and read-pair approaches for SV identification, our method can pinpoint the exact breakpoints of SV events, reveal the actual sequence content of insertions, and cover the whole size spectrum for deletions. Moreover, with the advent of the third-generation sequencing technologies that produce longer reads, we expect our method to be even more useful. PMID:21787423
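    The calibration idea, scoring calls against the known truth set of a simulated genome to obtain sensitivity and positive predictive value, reduces to a small bookkeeping exercise. The event keys and exact-match rule below are simplified assumptions; real calibration matches breakpoints with a tolerance and stratifies by event class and length.

    ```python
    # Compare SV calls against the simulated truth set.
    def calibrate(true_events, called_events):
        true_set, called_set = set(true_events), set(called_events)
        tp = len(true_set & called_set)                    # true positives
        sensitivity = tp / len(true_set) if true_set else 0.0
        ppv = tp / len(called_set) if called_set else 0.0  # 1 - FDR
        return sensitivity, ppv

    # Toy example: events keyed by (chrom, breakpoint, type).
    truth = [("chr1", 1000, "DEL"), ("chr1", 5000, "INS"), ("chr1", 9000, "DEL")]
    calls = [("chr1", 1000, "DEL"), ("chr1", 9000, "DEL"), ("chr1", 12000, "INS")]

    sens, ppv = calibrate(truth, calls)
    print(sens, ppv)  # 2/3 of true events recovered, 2/3 of calls correct
    ```

    Dividing the observed call counts by class-specific sensitivities, as the paper does per length bin, is what turns biased raw counts into the genome-wide estimate.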

  9. A cis-regulatory logic simulator.

    PubMed

    Zeigler, Robert D; Gertz, Jason; Cohen, Barak A

    2007-07-27

    A major goal of computational studies of gene regulation is to accurately predict the expression of genes based on the cis-regulatory content of their promoters. The development of computational methods to decode the interactions among cis-regulatory elements has been slow, in part, because it is difficult to know, without extensive experimental validation, whether a particular method identifies the correct cis-regulatory interactions that underlie a given set of expression data. There is an urgent need for test expression data in which the interactions among cis-regulatory sites that produce the data are known. The ability to rapidly generate such data sets would facilitate the development and comparison of computational methods that predict gene expression patterns from promoter sequence. We developed a gene expression simulator which generates expression data using user-defined interactions between cis-regulatory sites. The simulator can incorporate additive, cooperative, competitive, and synergistic interactions between regulatory elements. Constraints on the spacing, distance, and orientation of regulatory elements and their interactions may also be defined and Gaussian noise can be added to the expression values. The simulator allows for a data transformation that simulates the sigmoid shape of expression levels from real promoters. We found good agreement between sets of simulated promoters and predicted regulatory modules from real expression data. We present several data sets that may be useful for testing new methodologies for predicting gene expression from promoter sequence. We developed a flexible gene expression simulator that rapidly generates large numbers of simulated promoters and their corresponding transcriptional output based on specified interactions between cis-regulatory sites. When appropriate rule sets are used, the data generated by our simulator faithfully reproduces experimentally derived data sets. 
We anticipate that using simulated gene expression data sets will facilitate the direct comparison of computational strategies to predict gene expression from promoter sequence. The source code is available online and as additional material. The test sets are available as additional material.
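    A minimal sketch of the simulator's additive mode, with the Gaussian noise and sigmoid transformation mentioned above, might look like this. The occupancy matrix, effect sizes, and noise level are invented for illustration and are not the published tool's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Additive mode: expression is the summed effect of occupied sites, plus
    # Gaussian noise, squashed through a sigmoid to mimic the saturating
    # response of real promoters.
    def simulate_expression(site_occupancy, site_effects, noise_sd, rng):
        drive = site_occupancy @ site_effects
        drive = drive + rng.normal(0.0, noise_sd, size=drive.shape)
        return sigmoid(drive)

    # 5 simulated promoters x 3 cis-regulatory sites (1 = site present).
    occupancy = rng.integers(0, 2, size=(5, 3)).astype(float)
    effects = np.array([1.5, -0.8, 2.0])    # two activators and one repressor

    expr = simulate_expression(occupancy, effects, noise_sd=0.1, rng=rng)
    print(expr)  # expression levels strictly inside (0, 1)
    ```

    Cooperative or competitive interactions would replace the plain dot product with pairwise terms between sites, which is the kind of rule set the simulator lets users define.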

  10. Dynamical system modeling to simulate donor T cell response to whole exome sequencing-derived recipient peptides: Understanding randomness in alloreactivity incidence following stem cell transplantation.

    PubMed

    Koparde, Vishal; Abdul Razzaq, Badar; Suntum, Tara; Sabo, Roy; Scalora, Allison; Serrano, Myrna; Jameson-Lee, Max; Hall, Charles; Kobulnicky, David; Sheth, Nihar; Feltz, Juliana; Contaifer, Daniel; Wijesinghe, Dayanjan; Reed, Jason; Roberts, Catherine; Qayyum, Rehan; Buck, Gregory; Neale, Michael; Toor, Amir

    2017-01-01

    The quantitative relationship between the magnitude of variation in minor histocompatibility antigens (mHA) and graft versus host disease (GVHD) pathophysiology in stem cell transplant (SCT) donor-recipient pairs (DRP) is not established. In order to elucidate this relationship, whole exome sequencing (WES) was performed on 27 HLA-matched related (MRD) and 50 unrelated donor (URD) pairs to identify nonsynonymous single nucleotide polymorphisms (SNPs). An average of 2,463 SNPs were identified in MRD and 4,287 in URD DRP (p<0.01); the resulting peptide antigens that may be presented on HLA class I molecules in each DRP were derived in silico (NetMHCpan ver2.0), and the tissue expression of the proteins they were derived from was determined (GTEx). MRD DRP had an average of 3,670 HLA-binding alloreactive peptides, putative mHA (pmHA), with an IC50 of <500 nM, and URD DRP had 5,386 (p<0.01). To simulate an alloreactive donor cytotoxic T cell response, the array of pmHA in each patient was considered as an operator matrix modifying a hypothetical cytotoxic T cell clonal vector matrix; each responding T cell clone's proliferation was determined by the logistic equation of growth, accounting for the HLA binding affinity and tissue expression of each alloreactive peptide. The resulting simulated organ-specific alloreactive T cell clonal growth revealed marked variability, with the T cell count differences spanning orders of magnitude between different DRP. Despite a uniform set of estimated constants used in the model for all DRP, and a heterogeneously treated group of patients, higher total and organ-specific T cell counts were associated with the cumulative incidence of moderate to severe GVHD in recipients. In conclusion, exome-wide sequence differences and the variable alloreactive peptide binding to HLA in each DRP yield a large range of possible alloreactive donor T cell responses. Our findings also help explain the apparent randomness observed in the development of alloimmune responses.
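    The clonal-growth step described above can be sketched with the discrete logistic equation. The scaling of each clone's growth rate by IC50 (lower IC50, stronger binding) and tissue expression below is an assumed functional form chosen only to illustrate the idea, not the authors' fitted constants.

    ```python
    import numpy as np

    # Each clone's growth rate is scaled by HLA binding strength and by
    # target-tissue expression; both scalings are illustrative assumptions.
    def simulate_clones(ic50_nM, tissue_expr, steps=50, r0=0.05, K=1e6):
        counts = np.ones_like(ic50_nM)           # start each clone at 1 cell
        rates = r0 * (500.0 / ic50_nM) * tissue_expr
        for _ in range(steps):
            counts = counts + rates * counts * (1.0 - counts / K)  # logistic step
        return counts

    ic50 = np.array([50.0, 200.0, 450.0])   # nM, all under the 500 nM cutoff
    expr = np.array([1.0, 0.6, 0.2])        # relative tissue expression

    counts = simulate_clones(ic50, expr)
    print(counts)  # strong binders against well-expressed targets dominate
    ```

    Even with identical constants across clones, the spread of IC50 and expression values yields final counts spanning orders of magnitude, which is the variability the paper reports across DRP.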

  11. Dynamical system modeling to simulate donor T cell response to whole exome sequencing-derived recipient peptides: Understanding randomness in alloreactivity incidence following stem cell transplantation

    PubMed Central

    Suntum, Tara; Sabo, Roy; Scalora, Allison; Serrano, Myrna; Jameson-Lee, Max; Hall, Charles; Kobulnicky, David; Sheth, Nihar; Feltz, Juliana; Contaifer, Daniel; Wijesinghe, Dayanjan; Reed, Jason; Roberts, Catherine; Qayyum, Rehan; Buck, Gregory; Neale, Michael

    2017-01-01

    The quantitative relationship between the magnitude of variation in minor histocompatibility antigens (mHA) and graft versus host disease (GVHD) pathophysiology in stem cell transplant (SCT) donor-recipient pairs (DRP) is not established. In order to elucidate this relationship, whole exome sequencing (WES) was performed on 27 HLA-matched related (MRD) and 50 unrelated donor (URD) pairs to identify nonsynonymous single nucleotide polymorphisms (SNPs). An average of 2,463 SNPs were identified in MRD and 4,287 in URD DRP (p<0.01); the resulting peptide antigens that may be presented on HLA class I molecules in each DRP were derived in silico (NetMHCpan ver2.0), and the tissue expression of the proteins they were derived from was determined (GTEx). MRD DRP had an average of 3,670 HLA-binding alloreactive peptides, putative mHA (pmHA), with an IC50 of <500 nM, and URD DRP had 5,386 (p<0.01). To simulate an alloreactive donor cytotoxic T cell response, the array of pmHA in each patient was considered as an operator matrix modifying a hypothetical cytotoxic T cell clonal vector matrix; each responding T cell clone's proliferation was determined by the logistic equation of growth, accounting for the HLA binding affinity and tissue expression of each alloreactive peptide. The resulting simulated organ-specific alloreactive T cell clonal growth revealed marked variability, with the T cell count differences spanning orders of magnitude between different DRP. Despite a uniform set of estimated constants used in the model for all DRP, and a heterogeneously treated group of patients, higher total and organ-specific T cell counts were associated with the cumulative incidence of moderate to severe GVHD in recipients. In conclusion, exome-wide sequence differences and the variable alloreactive peptide binding to HLA in each DRP yield a large range of possible alloreactive donor T cell responses. Our findings also help explain the apparent randomness observed in the development of alloimmune responses. 
PMID:29194460

  12. A single origin and moderate bottleneck during domestication of soybean (Glycine max): implications from microsatellites and nucleotide sequences

    PubMed Central

    Guo, Juan; Wang, Yunsheng; Song, Chi; Zhou, Jianfeng; Qiu, Lijuan; Huang, Hongwen; Wang, Ying

    2010-01-01

    Background and Aims It is essential to illuminate the evolutionary history of crop domestication in order to understand further the origin and development of modern cultivation and agronomy; however, despite being one of the most important crops, the domestication origin and bottleneck of soybean (Glycine max) are poorly understood. In the present study, microsatellites and nucleotide sequences were employed to elucidate the domestication genetics of soybean. Methods The genomes of 79 landrace soybeans (endemic cultivated soybeans) and 231 wild soybeans (G. soja) that represented the species-wide distribution of wild soybean in East Asia were scanned with 56 microsatellites to identify the genetic structure and domestication origin of soybean. To understand better the domestication bottleneck, four nucleotide sequences were selected to simulate the domestication bottleneck. Key Results Model-based analysis revealed that most of the landrace genotypes were assigned to the inferred wild soybean cluster of south China, South Korea and Japan. Phylogeny for wild and landrace soybeans showed that all landrace soybeans formed a single cluster supporting a monophyletic origin of all the cultivars. The populations of the nearest branches which were basal to the cultivar lineage were wild soybeans from south China. The coalescent simulation detected a bottleneck severity of K′ = 2 during soybean domestication, which could be explained by a foundation population of 6000 individuals if domestication duration lasted 3000 years. Conclusions As a result of integrating geographic distribution with microsatellite genotype assignment and phylogeny between landrace and wild soybeans, a single origin of soybean in south China is proposed. The coalescent simulation revealed a moderate genetic bottleneck with an effective wild soybean population used for domestication estimated to be ≈2 % of the total number of ancestral wild soybeans. 
    Wild soybeans in Asia, especially in south China, contain tremendous genetic resources for cultivar improvement. PMID:20566681

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    N, Gwilliam M; J, Collins D; O, Leach M

    Purpose: To assess the feasibility of accurately quantifying the concentration of MRI contrast agent (CA) in pulsatile flowing blood by measuring its T1, as is common for the purposes of obtaining a patient-specific arterial input function (AIF). Dynamic contrast enhanced (DCE) MRI and pharmacokinetic (PK) modelling are widely used to produce measures of vascular function, but inaccurate measurement of the AIF undermines their accuracy. A proposed solution is to measure the T1 of blood in a large vessel using the Fram double flip angle method during the passage of a bolus of CA. This work expands on previous work by assessing pulsatile flow and the changes in T1 seen with a CA bolus. Methods: A phantom was developed which used a physiological pump to pass fluid of a known T1 (812 ms) through the centre of a head coil of a clinical 1.5 T MRI scanner. Measurements were made using high temporal resolution sequences suitable for DCE-MRI and were used to validate a virtual phantom that simulated the expected errors due to pulsatile flow and the bolus of CA concentration changes typically found in patients. Results: Measured and virtual results showed similar trends, although there were differences that may be attributed to the virtual phantom not accurately simulating the spin history of the fluid before entering the imaging volume. The relationship between T1 measurement and flow speed was non-linear. T1 measurement is compromised by new spins flowing into the imaging volume without having been subject to enough excitations to reach steady state. The virtual phantom demonstrated a range of recorded T1 values for various simulated T1 / flow rate combinations. Conclusion: T1 measurement of flowing blood using standard DCE-MRI sequences is very challenging. Measurement error is non-linear with relation to instantaneous flow speed. Optimising sequence parameters and lowering the baseline T1 of blood should be considered.
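    For reference, the Fram double flip angle method mentioned above recovers T1 exactly from two spoiled gradient-echo signals when spins are in steady state, which is precisely the condition the abstract notes is violated by inflowing blood. A minimal noiseless sketch (TR and flip angles are assumed values, not the study's protocol):

    ```python
    import numpy as np

    def spgr_signal(M0, T1, TR, alpha):
        """Steady-state spoiled gradient-echo signal (forward model)."""
        E1 = np.exp(-TR / T1)
        return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

    def t1_double_flip(S1, S2, a1, a2, TR):
        # Linearized SPGR: S/sin(a) = E1 * (S/tan(a)) + M0 * (1 - E1),
        # so E1 is the slope through the two (x, y) points.
        y1, y2 = S1 / np.sin(a1), S2 / np.sin(a2)
        x1, x2 = S1 / np.tan(a1), S2 / np.tan(a2)
        E1 = (y2 - y1) / (x2 - x1)
        return -TR / np.log(E1)

    TR = 5e-3                                 # 5 ms repetition time (assumed)
    a1, a2 = np.deg2rad(5.0), np.deg2rad(25.0)
    T1_true = 0.812                           # the phantom fluid's 812 ms

    S1 = spgr_signal(1.0, T1_true, TR, a1)
    S2 = spgr_signal(1.0, T1_true, TR, a2)
    T1_est = t1_double_flip(S1, S2, a1, a2, TR)
    print(T1_est)  # recovers 0.812 s in this ideal steady state
    ```

    Fresh spins entering the slice have experienced fewer excitations than the steady-state formula assumes, so both measured signals are biased and the recovered T1 becomes flow-dependent, which is the error mechanism the phantom was built to quantify.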

  14. Numerical investigation of coupled density-driven flow and hydrogeochemical processes below playas

    NASA Astrophysics Data System (ADS)

    Hamann, Enrico; Post, Vincent; Kohfahl, Claus; Prommer, Henning; Simmons, Craig T.

    2015-11-01

    Numerical modeling approaches with varying complexity were explored to investigate coupled groundwater flow and geochemical processes in saline basins. Long-term model simulations of a playa system gain insights into the complex feedback mechanisms between density-driven flow and the spatiotemporal patterns of precipitating evaporites and evolving brines. Using a reactive multicomponent transport model approach, the simulations reproduced, for the first time in a numerical study, the evaporite precipitation sequences frequently observed in saline basins ("bull's eyes"). Playa-specific flow, evapoconcentration, and chemical divides were found to be the primary controls for the location of evaporites formed, and the resulting brine chemistry. Comparative simulations with the computationally far less demanding surrogate single-species transport models showed that these were still able to replicate the major flow patterns obtained by the more complex reactive transport simulations. However, the simulated degree of salinization was clearly lower than in reactive multicomponent transport simulations. For example, in the late stages of the simulations, when the brine becomes halite-saturated, the nonreactive simulation overestimated the solute mass by almost 20%. The simulations highlight the importance of the consideration of reactive transport processes for understanding and quantifying geochemical patterns, concentrations of individual dissolved solutes, and evaporite evolution.

  15. Motion Tracking of the Carotid Artery Wall From Ultrasound Image Sequences: a Nonlinear State-Space Approach.

    PubMed

    Gao, Zhifan; Li, Yanjie; Sun, Yuanyuan; Yang, Jiayuan; Xiong, Huahua; Zhang, Heye; Liu, Xin; Wu, Wanqing; Liang, Dong; Li, Shuo

    2018-01-01

    The motion of the common carotid artery (CCA) wall has been established to be useful in the early diagnosis of atherosclerotic disease. However, tracking CCA wall motion from ultrasound images remains a challenging task. In this paper, a nonlinear state-space approach has been developed to track CCA wall motion from ultrasound sequences. In this approach, a nonlinear state-space equation with a time-variant control signal was constructed from a mathematical model of the dynamics of the CCA wall. Then, the unscented Kalman filter (UKF) was adopted to solve the nonlinear state transfer function in order to evolve the state of the target tissue, which involves estimating the motion trajectory of the CCA wall from noisy ultrasound images. The performance of this approach has been validated on 30 simulated ultrasound sequences and a real ultrasound dataset of 103 subjects by comparing the motion tracking results obtained in this study to those of three state-of-the-art methods and of the manual tracing method performed by two experienced ultrasound physicians. The experimental results demonstrated that the proposed approach is highly correlated with (intra-class correlation coefficient ≥ 0.9948 for the longitudinal motion and ≥ 0.9966 for the radial motion) and agrees well with (the 95% confidence interval width is 0.8871 mm for the longitudinal motion and 0.4159 mm for the radial motion) the manual tracing method on real data, and also exhibits high accuracy on simulated data (0.1161 ~ 0.1260 mm). These results appear to demonstrate the effectiveness of the proposed approach for motion tracking of the CCA wall.
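    As a simplified stand-in for the paper's UKF, a linear Kalman filter with a constant-velocity model already illustrates the predict/update tracking loop on noisy periodic wall motion. All parameters below (motion amplitude, frequency, noise levels, process-noise scaling) are invented; the paper instead evolves a nonlinear wall-dynamics model through the unscented transform.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    dt = 0.02
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    q = 900.0                               # white-noise-acceleration level
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])  # process noise covariance
    R = np.array([[0.05**2]])               # measurement noise (sd 0.05)

    t = np.arange(0.0, 2.0, dt)
    truth = 0.5 * np.sin(2 * np.pi * 1.2 * t)     # periodic wall displacement
    meas = truth + rng.normal(0.0, 0.05, t.size)  # speckle-like noise

    x = np.zeros((2, 1))
    P = np.eye(2)
    est = []
    for z in meas:
        x = F @ x                                       # predict state
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + K * (z - (H @ x).item())                # update with measurement
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0, 0])

    err = np.sqrt(np.mean((np.array(est) - truth) ** 2))
    print(f"tracking RMSE: {err:.4f} (measurement noise sd: 0.05)")
    ```

    The UKF replaces the matrices F and H with nonlinear functions and propagates a set of sigma points through them, but the predict/gain/update structure above is unchanged.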

  16. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
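    The data-aided core of ML carrier phase estimation is the angle of the correlation between received samples and symbol estimates; in the code-aided setting, the decoder's soft symbol estimates play that role at each iteration. The sketch below idealizes the soft estimates as the true BPSK symbols, and the noise level is an assumed value.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 512
    symbols = rng.choice([-1.0, 1.0], n)       # BPSK symbol sequence
    true_phase = 0.7                           # unknown carrier phase (rad)
    noise = rng.normal(0.0, 0.3, n) + 1j * rng.normal(0.0, 0.3, n)
    received = symbols * np.exp(1j * true_phase) + noise

    # ML phase estimate: the angle of the correlation between the received
    # samples and the (soft) symbol estimates. In the code-aided iteration
    # these estimates come from the MAP decoder rather than a pilot sequence.
    def ml_phase(r, soft_symbols):
        return np.angle(np.sum(r * np.conj(soft_symbols)))

    phase_hat = ml_phase(received, symbols)
    print(phase_hat)  # close to 0.7
    ```

    With BPSK, flipping every symbol estimate leaves the correlation magnitude unchanged and shifts its angle by pi; that is the phase ambiguity the paper resolves by exploiting the code structure instead of a pilot.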

  17. General simulation algorithm for autocorrelated binary processes.

    PubMed

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
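    A simple special case of the problem above, a binary sequence with prescribed marginal probability and exponentially decaying (Markov) autocorrelation, can be generated directly from a two-state chain; the paper's spectrum-based algorithm generalizes this to arbitrary autocorrelation structures such as power laws. The parameter values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Two-state Markov chain with marginal P(X=1) = p and lag-k
    # autocorrelation rho**k (for 0 < rho < 1).
    def binary_markov(n, p, rho, rng):
        p11 = p + rho * (1 - p)      # P(1 -> 1)
        p01 = p * (1 - rho)          # P(0 -> 1)
        x = np.empty(n, dtype=int)
        x[0] = rng.random() < p
        for i in range(1, n):
            thresh = p11 if x[i - 1] else p01
            x[i] = rng.random() < thresh
        return x

    x = binary_markov(200_000, p=0.3, rho=0.5, rng=rng)

    # Empirical marginal and lag-1 autocorrelation match the targets.
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(x.mean(), r1)  # ~0.3 and ~0.5
    ```

    The transition probabilities are exact: stationarity gives P(X=1) = p, and Cov(X_t, X_{t+1}) = p(1-p)·rho, so the lag-1 correlation is rho by construction.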

  18. Combining protein sequence, structure, and dynamics: A novel approach for functional evolution analysis of PAS domain superfamily.

    PubMed

    Dong, Zheng; Zhou, Hongyu; Tao, Peng

    2018-02-01

    PAS domains are widespread in archaea, bacteria, and eukaryota, and play important roles in various functions. In this study, we aim to explore functional evolutionary relationship among proteins in the PAS domain superfamily in view of the sequence-structure-dynamics-function relationship. We collected protein sequences and crystal structure data from RCSB Protein Data Bank of the PAS domain superfamily belonging to three biological functions (nucleotide binding, photoreceptor activity, and transferase activity). Protein sequences were aligned and then used to select sequence-conserved residues and build phylogenetic tree. Three-dimensional structure alignment was also applied to obtain structure-conserved residues. The protein dynamics were analyzed using elastic network model (ENM) and validated by molecular dynamics (MD) simulation. The result showed that the proteins with same function could be grouped by sequence similarity, and proteins in different functional groups displayed statistically significant difference in their vibrational patterns. Interestingly, in all three functional groups, conserved amino acid residues identified by sequence and structure conservation analysis generally have a lower fluctuation than other residues. In addition, the fluctuation of conserved residues in each biological function group was strongly correlated with the corresponding biological function. This research suggested a direct connection in which the protein sequences were related to various functions through structural dynamics. This is a new attempt to delineate functional evolution of proteins using the integrated information of sequence, structure, and dynamics. © 2017 The Protein Society.


  19. G-STRATEGY: Optimal Selection of Individuals for Sequencing in Genetic Association Studies

    PubMed Central

    Wang, Miaoyan; Jakobsdottir, Johanna; Smith, Albert V.; McPeek, Mary Sara

    2017-01-01

    In a large-scale genetic association study, the number of phenotyped individuals available for sequencing may, in some cases, be greater than the study’s sequencing budget will allow. In that case, it can be important to prioritize individuals for sequencing in a way that optimizes power for association with the trait. Suppose a cohort of phenotyped individuals is available, with some subset of them possibly already sequenced, and one wants to choose an additional fixed-size subset of individuals to sequence in such a way that the power to detect association is maximized. When the phenotyped sample includes related individuals, power for association can be gained by including partial information, such as phenotype data of ungenotyped relatives, in the analysis, and this should be taken into account when assessing whom to sequence. We propose G-STRATEGY, which uses simulated annealing to choose a subset of individuals for sequencing that maximizes the expected power for association. In simulations, G-STRATEGY performs extremely well for a range of complex disease models and outperforms other strategies with, in many cases, relative power increases of 20–40% over the next best strategy, while maintaining correct type 1 error. G-STRATEGY is computationally feasible even for large datasets and complex pedigrees. We apply G-STRATEGY to data on HDL and LDL from the AGES-Reykjavik and REFINE-Reykjavik studies, in which G-STRATEGY closely approximates the power of sequencing the full sample by selecting only a small subset of the individuals for sequencing. PMID:27256766
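    The simulated annealing selection can be sketched as follows. The per-individual "informativeness" score below is a hypothetical additive surrogate (for which a greedy ranking would actually suffice); G-STRATEGY's real objective, expected association power accounting for pedigree information, is non-additive across individuals, which is what motivates annealing.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def anneal_subset(scores, k, n_iter=5000, t0=1.0, rng=rng):
        """Simulated annealing over fixed-size subsets: propose swapping one
        selected individual for an unselected one; always accept uphill moves,
        and accept downhill moves with a temperature-dependent probability."""
        n = len(scores)

        def value(idx):
            return scores[idx].sum()

        current = rng.choice(n, size=k, replace=False)
        best = current.copy()
        for i in range(n_iter):
            temp = t0 * (1.0 - i / n_iter) + 1e-9       # linear cooling
            cand = current.copy()
            pool = np.setdiff1d(np.arange(n), current)  # unselected individuals
            cand[rng.integers(k)] = rng.choice(pool)
            delta = value(cand) - value(current)
            if delta > 0 or rng.random() < np.exp(delta / temp):
                current = cand
            if value(current) > value(best):
                best = current.copy()
        return best

    scores = rng.random(100)      # hypothetical per-individual informativeness
    chosen = anneal_subset(scores, k=10)
    print(scores[chosen].sum(), np.sort(scores)[-10:].sum())  # near the optimum
    ```

    Replacing `value` with a power calculation over the candidate subset plus its ungenotyped relatives turns this skeleton into the shape of the published method.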

  20. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and the high accuracy in phylogenetic inference as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
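    For reference, the NJ criterion selects, at each agglomeration step, the pair (i, j) minimizing Q(i, j) = (n - 2)·d(i, j) - Σ_k d(i, k) - Σ_k d(j, k). A sketch of that selection step on a small additive distance matrix (the matrix and branch lengths are invented):

    ```python
    import numpy as np

    # One NJ step: build the Q matrix and pick the pair to join.
    def nj_pair(d):
        n = d.shape[0]
        totals = d.sum(axis=1)
        q = (n - 2) * d - totals[:, None] - totals[None, :]
        np.fill_diagonal(q, np.inf)   # never join a taxon with itself
        return np.unravel_index(np.argmin(q), q.shape)

    # Additive distances from a 4-taxon tree in which taxa (0, 1) and (2, 3)
    # form the two cherries (external branches 1, 2, 1, 2; internal branch 3).
    d = np.array([[0.0, 3.0, 5.0, 6.0],
                  [3.0, 0.0, 6.0, 7.0],
                  [5.0, 6.0, 0.0, 3.0],
                  [6.0, 7.0, 3.0, 0.0]])

    i, j = nj_pair(d)
    print(i, j)  # one of the two true cherries is selected
    ```

    The subtraction of row totals is what lets NJ correctly join true neighbors even when they are not the closest pair in raw distance, which matters for the deep-versus-shallow branch comparison the abstract discusses.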

  1. Statistical thermodynamics of protein folding: Comparison of a mean-field theory with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hao, Ming-Hong; Scheraga, Harold A.

    1995-01-01

    A comparative study of protein folding with an analytical theory and computer simulations, respectively, is reported. The theory is based on an improved mean-field formalism which, in addition to the usual mean-field approximations, takes into account the distributions of energies in the subsets of conformational states. Sequence-specific properties of proteins are parametrized in the theory by two sets of variables, one for the energetics of mean-field interactions and one for the distribution of energies. Simulations are carried out on model polypeptides with different sequences, with different chain lengths, and with different interaction potentials, ranging from strong biases towards certain local chain states (bond angles and torsional angles) to complete absence of local conformational preferences. Theoretical analysis of the simulation results for the model polypeptides reveals three different types of behavior in the folding transition from the statistical coiled state to the compact globular state; these include a cooperative two-state transition, a continuous folding, and a glasslike transition. It is found that, with the fitted theoretical parameters which are specific for each polypeptide under a different potential, the mean-field theory can describe the thermodynamic properties and folding behavior of the different polypeptides accurately. By comparing the theoretical descriptions with simulation results, we verify the basic assumptions of the theory and, thereby, obtain new insights about the folding transitions of proteins. 
It is found that the cooperativity of the first-order folding transition of the model polypeptides is determined mainly by long-range interactions, in particular the dipolar orientation; the local interactions (e.g., bond-angle and torsion-angle potentials) have only marginal effect on the cooperative characteristic of the folding, but have a large impact on the difference in energy between the folded lowest-energy structure and the unfolded conformations of a protein.

  2. Commonly-occurring polymorphisms in the COMT, DRD1 and DRD2 genes influence different aspects of motor sequence learning in humans.

    PubMed

    Baetu, Irina; Burns, Nicholas R; Urry, Kristi; Barbante, Girolamo Giovanni; Pitcher, Julia B

    2015-11-01

    Performing sequences of movements is a ubiquitous skill that involves dopamine transmission. However, it is unclear which components of the dopamine system contribute to which aspects of motor sequence learning. Here we used a genetic approach to investigate the relationship between different components of the dopamine system and specific aspects of sequence learning in humans. In particular, we investigated variations in genes that code for the catechol-O-methyltransferase (COMT) enzyme, the dopamine transporter (DAT) and dopamine D1 and D2 receptors (DRD1 and DRD2). COMT and the DAT regulate dopamine availability in the prefrontal cortex and the striatum, respectively, two key regions recruited during learning, whereas dopamine D1 and D2 receptors are thought to be involved in long-term potentiation and depression, respectively. We show that polymorphisms in the COMT, DRD1 and DRD2 genes differentially affect behavioral performance on a sequence learning task in 161 Caucasian participants. The DRD1 polymorphism predicted the ability to learn new sequences, the DRD2 polymorphism predicted the ability to perform a previously learnt sequence after performing interfering random movements, whereas the COMT polymorphism predicted the ability to switch flexibly between two sequences. We used computer simulations to explore potential mechanisms underlying these effects, which revealed that the DRD1 and DRD2 effects are possibly related to neuroplasticity. Our prediction-error algorithm estimated faster rates of connection strengthening in genotype groups with presumably higher D1 receptor densities, and faster rates of connection weakening in genotype groups with presumably higher D2 receptor densities. Consistent with current dopamine theories, these simulations suggest that D1-mediated neuroplasticity contributes to learning to select appropriate actions, whereas D2-mediated neuroplasticity is involved in learning to inhibit incorrect action plans. 
However, the learning algorithm did not account for the COMT effect, suggesting that prefrontal dopamine availability might affect sequence switching via other, non-learning, mechanisms. These findings provide insight into the function of the dopamine system, which is relevant to the development of treatments for disorders such as Parkinson's disease. Our results suggest that treatments targeting dopamine D1 receptors may improve learning of novel sequences, whereas those targeting dopamine D2 receptors may improve the ability to initiate previously learned sequences of movements. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
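The prediction-error mechanism invoked above can be illustrated with a minimal Rescorla-Wagner-style update in which positive errors strengthen a connection at a D1-like rate and negative errors weaken it at a D2-like rate. The function, rates, and reward values below are illustrative assumptions, not the authors' actual algorithm.

```python
def update_weight(w, reward, alpha_gain=0.3, alpha_loss=0.1):
    """One prediction-error update on a connection weight.

    alpha_gain models D1-like connection strengthening (positive error);
    alpha_loss models D2-like weakening (negative error). Both rates are
    illustrative values, not taken from the study.
    """
    error = reward - w
    rate = alpha_gain if error > 0 else alpha_loss
    return w + rate * error

# Strengthen the weight for a rewarded action; weaken one for an
# incorrect action plan.
w_correct, w_wrong = 0.0, 0.5
for _ in range(20):
    w_correct = update_weight(w_correct, reward=1.0)
    w_wrong = update_weight(w_wrong, reward=0.0)
```

Raising `alpha_gain` (a stand-in for higher D1 receptor density) speeds up strengthening, while raising `alpha_loss` (a stand-in for higher D2 density) speeds up weakening, mirroring the genotype-dependent rates the simulations estimated.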

  3. SU-E-J-155: Utilizing Varian TrueBeam Developer Mode for the Quantification of Mechanical Limits and the Simulation of 4D Respiratory Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moseley, D; Dave, M

    Purpose: Use Varian TrueBeam Developer Mode to quantify the mechanical limits of the couch and to simulate 4D respiratory motion. Methods: An in-house MATLAB-based GUI was created to make the beam XML files. The couch was moved in a triangular wave in the S/I direction with varying amplitudes (1 mm, 5 mm, 10 mm, and 50 mm) and periods (3 s, 6 s, and 9 s). The periods were determined by specifying the speed. The theoretical positions were compared to the values recorded by the machine at 50 Hz. HD videos were taken for certain tests as external validation. 4D respiratory motion was simulated by delivering an A/P MV beam while the couch moved in an elliptical manner. The ellipse had a major axis of 2 cm (S/I) and a minor axis of 1 cm (A/P). Results: The path planned by the TrueBeam deviated from the theoretical triangular form as the speed increased. Deviations were noticed starting at a speed of 3.33 cm/s (50 mm amplitude, 6 s period). The greatest deviation occurred in the 50 mm, 3 s sequence, with a correlation value of −0.13 and a 27% time increase; the plan essentially became out of phase. Excluding these two, the plans had correlation values of 0.99. The elliptical sequence effectively simulated a respiratory pattern with a period of 6 s. The period could be controlled by changing the speeds or the dose rate. Conclusion: The work first shows the quantification of the mechanical limits of the couch and the speeds at which the proposed plans begin to deviate. These limits must be kept in mind when programming other couch sequences. The methodology can be used to quantify the limits of other axes. Furthermore, the work shows the possibility of creating 4D respiratory simulations without using specialized phantoms or motion platforms. This can be further developed to program patient-specific breathing patterns.
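The planned-versus-logged comparison can be prototyped in a few lines: generate the ideal triangular S/I waveform at the 50 Hz logging rate and correlate it against an executed trace. Here a hypothetical half-period-lagged trace stands in for the out-of-phase 50 mm / 3 s case; the function names and the perfectly antiphase trace are illustrative, not the in-house GUI's code.

```python
import numpy as np

def triangular_wave(t, amplitude_mm, period_s):
    """Ideal S/I couch position (mm): symmetric triangle with a peak at t = 0."""
    phase = (t / period_s) % 1.0
    return amplitude_mm * (4 * np.abs(phase - 0.5) - 1)

t = np.arange(0.0, 6.0, 1.0 / 50.0)           # one 6 s period sampled at 50 Hz
planned = triangular_wave(t, amplitude_mm=50, period_s=6.0)

# A trace lagging by half a period is exactly out of phase with the plan,
# so its correlation with the planned path approaches -1.
executed = triangular_wave(t + 3.0, amplitude_mm=50, period_s=6.0)
r = np.corrcoef(planned, executed)[0, 1]
```

A well-executed trace would instead correlate near +1, matching the 0.99 values reported for the slower sequences.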

  4. RSRM top hat cover simulator lightning test, volume 1

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The test sequence was to measure electric and magnetic fields induced inside a redesigned solid rocket motor case when a simulated lightning discharge strikes an exposed top hat cover simulator. The test sequence was conducted between 21 June and 17 July 1990. Thirty-six high rate-of-rise Marx generator discharges and eight high-current bank discharges were injected onto three different test article configurations. Attach points included three locations on the top hat cover simulator and two locations on the mounting bolts. Damage to the top hat cover simulator, mounting bolts, and grain cover was observed. Overall electric field levels were well below 30 kilovolts/meter. Electric field levels ranged from 184.7 to 345.9 volts/meter, and calculated magnetic field levels ranged from 6.921 to 39.73 amperes/meter. It is recommended that the redesigned solid rocket motor top hat cover be used in Configuration 1 or Configuration 2 as an interim lightning protection device until a lightweight cover can be designed.

  5. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  6. Effect of Lamina Thickness of Prepreg on the Surface Accuracy of Carbon Fiber Composite Space Mirrors

    NASA Astrophysics Data System (ADS)

    Yang, Zhiyong; Tang, Zhanwen; Xie, Yongjie; Shi, Hanqiao; Zhang, Boming; Guo, Hongjun

    2018-02-01

    A composite space mirror can replicate the high-precision surface of its mould through a replication process, but the actual surface accuracy of the replicated composite mirror is always degraded. The lamina thickness of the prepreg determines the number of layers and the lay-up sequence of a composite space mirror, which in turn affect the mirror's surface accuracy. In our research, two groups of contrasting cases were studied through finite element analyses (FEA) and comparative experiments, focusing on the effect of different lamina thicknesses of prepreg and the corresponding lay-up sequences. We describe a dedicated analysis model, the validation process, and the analysis of the results. The simulated and measured surface figures lead to the same conclusion: reducing the lamina thickness of the prepreg used in replicating a composite space mirror facilitates optimal design of the lay-up sequence and can improve the mirror's surface accuracy.

  7. An improved stochastic fractal search algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun

    2018-05-03

    Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, drug development, and more. In this paper, three-dimensional structures of proteins are predicted from the known amino acid sequences, and the structure prediction problem is transformed into a typical NP-hard optimization problem via an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. To avoid this weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experiments, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results show that ISFS performs more efficiently and robustly in finding the global minimum and avoiding getting stuck in local minima.
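The Lévy-flight ingredient can be sketched with Mantegna's algorithm, which draws heavy-tailed steps so that a search occasionally makes large jumps out of local minima. This is a generic illustration of the operator, not the ISFS implementation.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Lévy-flight step via Mantegna's algorithm.

    beta in (1, 2] controls tail heaviness; the occasional very large
    step lets a stochastic search escape local minima.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(1000)]
```

In a full ISFS, such steps would perturb candidate conformations in the AB off-lattice energy landscape.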

  8. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and a stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that both AVC and HEVC compress PSDHS efficiently, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.

  9. A Statistical Framework for the Functional Analysis of Metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharon, Itai; Pati, Amrita; Markowitz, Victor

    2008-10-01

    Metagenomic studies consider the genetic makeup of microbial communities as a whole, rather than their individual member organisms. The functional and metabolic potential of microbial communities can be analyzed by comparing the relative abundance of gene families in their collective genomic sequences (metagenome) under different conditions. Such comparisons require accurate estimation of gene family frequencies. They present a statistical framework for assessing these frequencies based on the Lander-Waterman theory developed originally for Whole Genome Shotgun (WGS) sequencing projects. They also provide a novel method for assessing the reliability of the estimations, which can be used for removing seemingly unreliable measurements. They tested their method on a wide range of datasets, including simulated genomes and real WGS data from whole-genome sequencing projects. Results suggest that their framework corrects inherent biases in accepted methods and provides a good approximation to the true statistics of gene families in WGS projects.
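The Lander-Waterman quantities underlying such frequency estimates are straightforward to compute. The helper names below are ours, and the formulas are the standard uniform-sampling model rather than the authors' exact estimator.

```python
import math

def mean_coverage(n_reads, read_len, genome_len):
    """Lander-Waterman mean per-base coverage c = N * L / G."""
    return n_reads * read_len / genome_len

def prob_base_sequenced(n_reads, read_len, genome_len):
    """P(a given base is covered by at least one read) = 1 - e^(-c),
    from the Poisson approximation to uniform random read placement."""
    return 1.0 - math.exp(-mean_coverage(n_reads, read_len, genome_len))

def expected_gene_hits(n_reads, read_len, genome_len, gene_len):
    """Expected number of reads overlapping a gene-family region of
    length l: N * (l + L - 1) / G under uniform random sampling."""
    return n_reads * (gene_len + read_len - 1) / genome_len
```

Comparing an observed read count for a gene family against `expected_gene_hits` (given its copy number) is the kind of frequency estimate the framework formalizes.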

  10. Higher criticism approach to detect rare variants using whole genome sequencing data

    PubMed Central

    2014-01-01

    Because of low statistical power of single-variant tests for whole genome sequencing (WGS) data, the association test for variant groups is a key approach for genetic mapping. To address the features of sparse and weak genetic effects to be detected, the higher criticism (HC) approach has been proposed and theoretically has proven optimal for detecting sparse and weak genetic effects. Here we develop a strategy to apply the HC approach to WGS data that contains rare variants as the majority. By using Genetic Analysis Workshop 18 "dose" genetic data with simulated phenotypes, we assess the performance of HC under a variety of strategies for grouping variants and collapsing rare variants. The HC approach is compared with the minimal p-value method and the sequence kernel association test. The results show that the HC approach is preferred for detecting weak genetic effects. PMID:25519367
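The HC statistic itself is simple to compute from sorted p-values. The sketch below follows the standard Donoho-Jin form, restricted to the smallest half of the p-values; it illustrates the statistic, not the workshop's grouping and collapsing pipeline, and the example p-values are invented.

```python
import math

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin higher criticism statistic over the smallest alpha0
    fraction of the sorted p-values."""
    p = sorted(pvalues)
    n = len(p)
    hc = -math.inf
    for i, pi in enumerate(p, start=1):
        if i / n > alpha0:
            break
        pi = min(max(pi, 1e-12), 1 - 1e-12)   # guard the denominator
        z = math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1 - pi))
        hc = max(hc, z)
    return hc

# A few very small p-values among nulls (sparse, weak signal) push HC up.
null_only = [i / 101 for i in range(1, 101)]
with_signal = [1e-6] * 5 + null_only[:95]
```

Under pure nulls the statistic stays small; a handful of strong tests among many nulls inflates it sharply, which is why HC suits sparse-signal detection.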

  11. Research on Image Encryption Based on DNA Sequence and Chaos Theory

    NASA Astrophysics Data System (ADS)

    Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin

    2018-04-01

    Nowadays, encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed encryption algorithms based on DNA sequences, providing new ideas for the design of image encryption algorithms. Therefore, a new method of image encryption based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and 1-D logistic chaotic mapping. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second is generated by one-dimensional logistic chaotic mapping. Second, the algorithm encodes the original image using DNA complementary rules, and applies the key and DNA computing operations to each pixel value of the original image, so as to encrypt the whole image. Simulation results show that the algorithm achieves a good encryption effect and security.
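A stripped-down version of the two ingredients, a 1-D logistic keystream and DNA-coded XOR of pixel values, can be sketched as follows. The single fixed coding rule and all parameters are illustrative simplifications; the paper's scheme selects rules dynamically via DNA complementary rules and a real DNA sequence.

```python
def logistic_keystream(x0, n, r=3.99):
    """1-D logistic map x -> r*x*(1-x); each state yields one key byte."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(int(x * 256) % 256)
    return xs

# One fixed DNA coding rule (A=00, C=01, G=10, T=11) -- an illustrative
# simplification of the paper's rule selection.
BASES = "ACGT"

def byte_to_dna(b):
    return "".join(BASES[(b >> s) & 3] for s in (6, 4, 2, 0))

def dna_to_byte(s):
    b = 0
    for ch in s:
        b = (b << 2) | BASES.index(ch)
    return b

def dna_xor(p, k):
    """XOR two 4-base DNA strings via their underlying 2-bit codes."""
    return byte_to_dna(dna_to_byte(p) ^ dna_to_byte(k))

def encrypt(pixels, x0=0.3141):
    key = logistic_keystream(x0, len(pixels))
    return [dna_xor(byte_to_dna(p), byte_to_dna(k)) for p, k in zip(pixels, key)]

def decrypt(cipher, x0=0.3141):
    key = logistic_keystream(x0, len(cipher))
    return [dna_to_byte(dna_xor(c, byte_to_dna(k))) for c, k in zip(cipher, key)]
```

Decryption regenerates the same keystream from the shared initial value `x0` and XORs it away, recovering the pixel bytes exactly.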

  12. Streamwise-Localized Solutions with natural 1-fold symmetry

    NASA Astrophysics Data System (ADS)

    Altmeyer, Sebastian; Willis, Ashley; Hof, Björn

    2014-11-01

    It has been proposed in recent years that turbulence is organized around unstable invariant solutions, which provide the building blocks of the chaotic dynamics. In direct numerical simulations of pipe flow we show that when imposing a minimal symmetry constraint (reflection in an axial plane only) the formation of turbulence can indeed be explained by dynamical systems concepts. The hypersurface separating laminar from turbulent motion, the edge of turbulence, is spanned by the stable manifolds of an exact invariant solution, a periodic orbit of a spatially localized structure. The turbulent states themselves (turbulent puffs in this case) are shown to arise in a bifurcation sequence from a related localized solution (the upper branch orbit). The rather complex bifurcation sequence involves secondary Hopf bifurcations, frequency locking and a period doubling cascade until eventually turbulent puffs arise. In addition we report preliminary results of the transition sequence for pipe flow without symmetry constraints.

  13. Computational and experimental analysis of DNA shuffling

    PubMed Central

    Maheshri, Narendra; Schaffer, David V.

    2003-01-01

    We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764

  14. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
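Centroid detection via regular moments, as used after the registration step above, reduces to the zeroth- and first-order geometric moments. This minimal sketch assumes a plain 2-D intensity array and is not the authors' code.

```python
def centroid(image):
    """Target centroid from regular (geometric) image moments:
    m00 = sum I, m10 = sum x*I, m01 = sum y*I;
    centroid = (m10/m00, m01/m00).
    `image` is a 2-D list of intensities (rows = y, columns = x)."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            m00 += val
            m10 += x * val
            m01 += y * val
    return m10 / m00, m01 / m00

# A single bright scatterer at (x=3, y=1) dominates the centroid.
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 9, 0],
       [0, 0, 0, 0, 0]]
```

Tracking the centroid across an ISAR image sequence gives the frame-to-frame displacement that the correlation-based cost function then analyzes.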

  15. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation despite various ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Direct Numerical Simulation of Acoustic Waves Interacting with a Shock Wave in a Quasi-1D Convergent-Divergent Nozzle Using an Unstructured Finite Volume Algorithm

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.; Mankbadi, Reda R.

    1995-01-01

    Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piece-wise linear, least square reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.

  17. Ground Contact Model for Mars Science Laboratory Mission Simulations

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Way, David

    2012-01-01

    The Program to Optimize Simulated Trajectories II (POST 2) has been successful in simulating the flight of launch vehicles and entry bodies on Earth and other planets. POST 2 has been the primary simulation tool for the Entry, Descent, and Landing (EDL) phase of numerous Mars lander missions, such as Mars Pathfinder in 1997, the twin Mars Exploration Rovers (MER-A and MER-B) in 2004, and the Mars Phoenix lander in 2007, and it is now the main trajectory simulation tool for Mars Science Laboratory (MSL) in 2012. In all previous missions, the POST 2 simulation ended before ground impact, and a tool other than POST 2 simulated landing dynamics. It would be ideal for one tool to simulate the entire EDL sequence, thus avoiding errors that could be introduced by handing off position, velocity, or other flight parameters from one simulation to the other. The desire to have one continuous end-to-end simulation was the motivation for developing the ground interaction model in POST 2. Rover landing, including the detection of the post-landing state, is a very critical part of the MSL mission, as the EDL landing sequence continues for a few seconds after landing. The method explained in this paper illustrates how a simple ground force interaction model has been added to POST 2, which allows simulation of the entire EDL from atmospheric entry through touchdown.
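A simple ground interaction of the kind described, a one-sided spring-damper that pushes back only while the contact point penetrates the surface, can be sketched as below. The gains are invented for illustration; POST 2's actual model and constants are not specified in this summary.

```python
def ground_force(z, zdot, k=5.0e4, c=2.0e3):
    """Penalty-type ground reaction: a one-sided spring-damper.

    z: height of the contact point above the ground (m), negative when
    penetrating; zdot: its vertical velocity (m/s). Returns the upward
    force (N). Gains k and c are illustrative assumptions.
    """
    if z >= 0.0:
        return 0.0                 # no contact, no force
    force = -k * z - c * zdot      # spring resists penetration, damper resists rate
    return max(force, 0.0)         # the ground can push but never pull
```

Evaluated inside the trajectory integration loop, such a force lets one simulation carry the vehicle continuously from atmospheric entry through touchdown and settling.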

  18. Volatilisation and competing processes computed for a pesticide applied to plants in a wind tunnel system.

    PubMed

    Leistra, Minze; Wolters, André; van den Berg, Frederik

    2008-06-01

    Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some input data for simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued. Copyright (c) 2008 Society of Chemical Industry.

  19. Nucleic acids: theory and computer simulation, Y2K.

    PubMed

    Beveridge, D L; McConnell, K J

    2000-04-01

    Molecular dynamics simulations on DNA and RNA that include solvent are now being performed under realistic environmental conditions of water activity and salt. Improvements to force-fields and treatments of long-range interactions have significantly increased the reliability of simulations. New studies of sequence effects, axis bending, solvation and conformational transitions have appeared.

  20. Investigating the Effectiveness of Computer Simulations for Chemistry Learning

    ERIC Educational Resources Information Center

    Plass, Jan L.; Milne, Catherine; Homer, Bruce D.; Schwartz, Ruth N.; Hayward, Elizabeth O.; Jordan, Trace; Verkuilen, Jay; Ng, Florrie; Wang, Yan; Barrientos, Juan

    2012-01-01

    Are well-designed computer simulations an effective tool to support student understanding of complex concepts in chemistry when integrated into high school science classrooms? We investigated scaling up the use of a sequence of simulations of kinetic molecular theory and associated topics of diffusion, gas laws, and phase change, which we designed…

  1. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid.

    PubMed

    Li, Yuancheng; Qiu, Rixuan; Jing, Sitong

    2018-01-01

    Advanced Metering Infrastructure (AMI) realizes two-way communication of electricity data by interconnecting with a computer network as the core component of the smart grid. Meanwhile, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on Online Sequence Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy.
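OS-ELM's defining step is a recursive least-squares update of the output weights, which lets the model absorb new metering records without retraining from scratch. The sketch below uses random toy data in place of AMI traffic features and illustrates the update itself, not the paper's detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Random-feature hidden layer with sigmoid activation (ELM-style)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy setup: 3 input features, 20 hidden nodes, 1 output (e.g. attack score).
W = rng.normal(size=(3, 20))      # input weights, fixed at random
b = rng.normal(size=20)           # hidden biases, fixed at random

# Initial batch: beta0 = (H0^T H0)^-1 H0^T T0; keep P for sequential updates.
X0 = rng.normal(size=(40, 3))
T0 = rng.normal(size=(40, 1))
H0 = hidden(X0, W, b)
P = np.linalg.inv(H0.T @ H0)
beta = P @ H0.T @ T0

def os_elm_update(P, beta, Xk, Tk):
    """One OS-ELM sequential step in recursive least-squares form."""
    Hk = hidden(Xk, W, b)
    K = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ K @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return P, beta

# A new chunk of records arrives; the weights are updated in place.
X1 = rng.normal(size=(25, 3))
T1 = rng.normal(size=(25, 1))
P, beta = os_elm_update(P, beta, X1, T1)
```

The sequential result is algebraically equivalent to retraining on all data at once, which is the source of OS-ELM's speed advantage for streaming detection.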

  2. A novel chaotic based image encryption using a hybrid model of deoxyribonucleic acid and cellular automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Sadaei, Hossein Javedani; Abdullah, Abdul Hanan; Lee, Malrey; Isnin, Ismail Fauzi

    2015-08-01

    Currently, many studies have been conducted on improving the security of digital images in order to protect such data while they are transmitted over the internet. This work proposes a new approach based on a hybrid model of the Tinkerbell chaotic map, deoxyribonucleic acid (DNA) and cellular automata (CA). DNA rules, a DNA-sequence XOR operator and CA rules are used simultaneously to encrypt the plain-image pixels. To determine the rule number for the DNA sequence and for the CA, a 2-dimensional Tinkerbell chaotic map is employed. Experimental results and computer simulations both confirm that the proposed scheme not only demonstrates outstanding encryption but also resists various typical attacks.
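The 2-D Tinkerbell map that drives the rule selection is a two-line recurrence. The mod-8 rule extraction below is our illustrative stand-in for the paper's scheme for picking among the eight DNA coding rules; the parameters are the commonly cited chaotic values.

```python
def tinkerbell(x, y, a=0.9, b=-0.6013, c=2.0, d=0.5):
    """One iterate of the 2-D Tinkerbell map (classic chaotic parameters)."""
    return x * x - y * y + a * x + b * y, 2 * x * y + c * x + d * y

def rule_sequence(n, x=-0.72, y=-0.64):
    """Derive DNA/CA rule indices (0..7) from the orbit. The mod-8 mapping
    of the state is an illustrative assumption, not the paper's formula."""
    rules = []
    for _ in range(n):
        x, y = tinkerbell(x, y)
        rules.append(int(abs(x) * 1e6) % 8)
    return rules
```

Because the map is deterministic, sender and receiver who share the initial point (the key) derive identical rule sequences, while small key changes yield unrelated orbits.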

  3. Open-Source Sequence Clustering Methods Improve the State Of the Art.

    PubMed

    Kopylova, Evguenia; Navas-Molina, Jose A; Mercier, Céline; Xu, Zhenjiang Zech; Mahé, Frédéric; He, Yan; Zhou, Hong-Wei; Rognes, Torbjørn; Caporaso, J Gregory; Knight, Rob

    2016-01-01

    Sequence clustering is a common early step in amplicon-based microbial community analysis, when raw sequencing reads are clustered into operational taxonomic units (OTUs) to reduce the run time of subsequent analysis steps. Here, we evaluated the performance of recently released state-of-the-art open-source clustering software products, namely, OTUCLUST, Swarm, SUMACLUST, and SortMeRNA, against current principal options (UCLUST and USEARCH) in QIIME, hierarchical clustering methods in mothur, and USEARCH's most recent clustering algorithm, UPARSE. All the latest open-source tools showed promising results, reporting up to 60% fewer spurious OTUs than UCLUST, indicating that the underlying clustering algorithm can vastly reduce the number of these derived OTUs. Furthermore, we observed that stringent quality filtering, such as is done in UPARSE, can cause a significant underestimation of species abundance and diversity, leading to incorrect biological results. Swarm, SUMACLUST, and SortMeRNA have been included in the QIIME 1.9.0 release. IMPORTANCE Massive collections of next-generation sequencing data call for fast, accurate, and easily accessible bioinformatics algorithms to perform sequence clustering. A comprehensive benchmark is presented, including open-source tools and the popular USEARCH suite. Simulated, mock, and environmental communities were used to analyze sensitivity, selectivity, species diversity (alpha and beta), and taxonomic composition. The results demonstrate that recent clustering algorithms can significantly improve accuracy and preserve estimated diversity without the application of aggressive filtering. Moreover, these tools are all open source, apply multiple levels of multithreading, and scale to the demands of modern next-generation sequencing data, which is essential for the analysis of massive multidisciplinary studies such as the Earth Microbiome Project (EMP) (J. A. Gilbert, J. K. Jansson, and R. Knight, BMC Biol 12:69, 2014, http://dx.doi.org/10.1186/s12915-014-0069-1).

  4. Fatigue-test acceleration with flight-by-flight loading and heating to simulate supersonic-transport operation

    NASA Technical Reports Server (NTRS)

    Imig, L. A.; Garrett, L. E.

    1973-01-01

    Possibilities for reducing fatigue-test time for supersonic-transport materials and structures were studied in tests with simulated flight-by-flight loading. In order to determine whether short-time tests were feasible, the results of accelerated tests (2 sec per flight) were compared with the results of real-time tests (96 min per flight). The effects of design mean stress, the stress range for ground-air-ground cycles, simulated thermal stress, the number of stress cycles in each flight, and salt corrosion were studied. The flight-by-flight stress sequences were applied to notched sheet specimens of Ti-8Al-1Mo-1V and Ti-6Al-4V titanium alloys. A linear cumulative-damage analysis accounted for large changes in stress range of the simulated flights but did not account for the differences between real-time and accelerated tests. The fatigue lives from accelerated tests were generally within a factor of two of the lives from real-time tests; thus, within the scope of the investigation, accelerated testing seems feasible.
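The linear cumulative-damage analysis mentioned is Miner's rule: damage fractions n_i/N_i from each loading block are summed, with failure predicted when the total reaches 1. The cycle counts and fatigue lives below are invented for illustration, not values from the report.

```python
def miner_damage(blocks):
    """Linear cumulative damage: D = sum(n_i / N_i) over loading blocks,
    where n_i cycles are applied at a stress level whose constant-amplitude
    fatigue life is N_i cycles. Failure is predicted when D reaches 1."""
    return sum(n / N for n, N in blocks)

# One simulated flight: a ground-air-ground cycle, gust cycles, and a
# thermal-stress cycle (all counts and lives are hypothetical).
flight = [(1, 30_000), (10, 2_000_000), (1, 100_000)]
damage_per_flight = miner_damage(flight)
flights_to_failure = 1 / damage_per_flight
```

The rule accounts for changes in stress range between blocks, but, as the study notes, it cannot by itself explain the difference between real-time and accelerated test lives, which involves time-dependent effects such as temperature and corrosion.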

  5. Caseinophosphopeptides released after tryptic hydrolysis versus simulated gastrointestinal digestion of a casein-derived by-product.

    PubMed

    Cruz-Huerta, E; García-Nebot, M J; Miralles, B; Recio, I; Amigo, L

    2015-02-01

    The production of caseinophosphopeptides from a casein-derived by-product generated during the manufacture of a functional ingredient based on antihypertensive peptides was attempted. The casein by-product was submitted to tryptic hydrolysis for 30, 60 and 120 min and further precipitated with calcium chloride and ethanol at pH 4.0, 6.0 and 8.0. Identification and semi-quantification of the derived products by tandem mass spectrometry revealed some qualitative and quantitative changes in the released caseinophosphopeptides over time at the different precipitation pHs. The by-product was also subjected to simulated gastrointestinal digestion. Comparison of the resulting peptides showed large sequence homology between the phosphopeptides released by tryptic hydrolysis and by simulated gastrointestinal digestion. Some regions, specifically αS1-CN 43-59, αS1-CN 60-74, β-CN 1-25 and β-CN 30-50, showed resistance to both tryptic hydrolysis and simulated digestion. The results of the present study suggest that this casein-derived by-product can be used as a source of CPPs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. On the red giant titanium oxide bands

    NASA Astrophysics Data System (ADS)

    Hanni, L.; Sitska, J.

    1985-12-01

    The dependence of TiO absorption in cool oxygen-sequence giant stars on the Teff and log g of their atmospheres is investigated theoretically on the basis of spectra simulated using the computer program described by Hanni (1983) and the giant model atmospheres of Johnson et al. (1980). The temperature dependence of the intensity jumps at the head of the alpha(1.0) band is determined from simulated spectra, and the jumps are related to spectral types using the calibration of Ridgway et al. (1980). The results are presented in tables and graphs and shown to be in good agreement with the empirical Teff/intensity-jump correlation of Boyarchuk (1969).

  7. BIRD: A general interface for sparse distributed memory simulators

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1990-01-01

    Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.

  8. Conformational dynamics of a short antigenic peptide in its free and antibody bound forms gives insight into the role of β-turns in peptide immunogenicity.

    PubMed

    Shukla, Rashmi Tambe; Sasidhar, Yellamraju U

    2015-07-01

    Earlier immunological experiments showed that a synthetic 36-residue peptide (residues 75-110) from influenza hemagglutinin elicits anti-peptide antibodies (Ab) that cross-react with the parent protein. In this article, we have studied the conformational features of a short antigenic (Ag) peptide (YPYDVPDYASLRS, residues 98-110) from influenza hemagglutinin in its free and Ab-bound forms with molecular dynamics simulations using the GROMACS package and the OPLS-AA/L all-atom force field at two different temperatures (293 K and 310 K). Multiple simulations of the free Ag peptide show sampling of ordered conformations and suggest different conformational preferences of the peptide at the two temperatures. The free Ag samples a conformation crucial for Ab binding (a β-turn formed by the "DYAS" sequence) with greater preference at 310 K, while it samples a native-like conformation with relatively greater propensity at 293 K. The sequence "DYAS" also samples a β-turn conformation with greater propensity at 310 K as part of the hemagglutinin protein. The bound Ag likewise samples the β-turn involving the "DYAS" sequence and, in addition, samples a β-turn formed by the sequence "YPYD" at its N-terminus, which appears to be induced upon binding to the Ab. Further, the bound Ag displays conformational flexibility at both 293 K and 310 K, particularly at the terminal residues. The implications of these results for peptide immunogenicity and Ag-Ab recognition are discussed. © 2015 Wiley Periodicals, Inc.

  9. Cell genealogies in a plant meristem deduced with the aid of a 'bootstrap' L-system.

    PubMed

    Lück, J; Barlow, P W; Lück, H B

    1994-01-01

    The primary root meristem of maize (Zea mays L.) contains longitudinal files of cells arranged in groups of familial descent (sisters, cousins, etc.). These groups, or packets, show ordered sequences of cell division which are transverse with respect to the apico-basal axis of the root. The sequences have been analysed in three zones of the meristem during the course of the first four cell generations following germination. In this period, the number of cells in the packets increases from one to 16. Theoretically, there are 48 possible division pathways that lead to the eight-cell stage, and nearly 2 × 10^6 that lead to the 16-cell stage. However, analysis shows that only a few of all the possible pathways are used in any particular zone of the root. This restriction of pathways results from inherited sequences of asymmetric cell divisions which lead to sister cells of unequal length. All possible division pathways can be generated by deterministic 'bootstrap' L-systems which assign different lifespans to sister cells of successive generations and hence specify their subsequent sequence of divisions. These systems simulate propagating patterns of cell divisions which agree with those actually found within the growing packets that comprise the root meristem. The patterns of division are specific to cells originating in various regions of the meristem of the germinating root. The importance of such systems is that they simulate patterns of cellular proliferation where there is ancestral dependency. They can therefore be applied in other growing and proliferating systems where this is suspected.
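    The 'bootstrap' L-system mechanism described above can be sketched as a toy deterministic rewriting system in which each symbol encodes a cell's remaining lifespan and each division yields sisters of unequal lifespan. The rule table below is hypothetical and purely illustrative, not the one fitted to the maize meristem data:

    ```python
    # Toy deterministic L-system: a cell with remaining lifespan 0 divides
    # asymmetrically into two sisters with unequal (hypothetical) lifespans;
    # all other cells simply age by one time step. Unequal sister lifespans
    # stagger later divisions, so the division pattern propagates with
    # ancestral dependency, as in the 'bootstrap' L-systems described above.

    RULES = {
        0: [1, 2],   # hypothetical rule: short-lived and long-lived sister
    }

    def step(cells):
        out = []
        for life in cells:
            if life == 0:
                out.extend(RULES[0])      # asymmetric division
            else:
                out.append(life - 1)      # ageing
        return out

    def simulate(n_steps, start=(0,)):
        cells = list(start)
        history = [list(cells)]
        for _ in range(n_steps):
            cells = step(cells)
            history.append(list(cells))
        return history

    hist = simulate(6)
    # packet sizes per generation step
    print([len(gen) for gen in hist])   # [1, 2, 2, 3, 4, 5, 7]
    ```

    Changing the lifespans in the rule table selects a different division pathway, which is how such systems can be matched against the pathways observed in each zone of the meristem.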

  10. Haplotype estimation using sequencing reads.

    PubMed

    Delaneau, Olivier; Howie, Bryan; Cox, Anthony J; Zagury, Jean-François; Marchini, Jonathan

    2013-10-03

    High-throughput sequencing technologies produce short sequence reads that can contain phase information if they span two or more heterozygote genotypes. This information is not routinely used by current methods that infer haplotypes from genotype data. We have extended the SHAPEIT2 method to use phase-informative sequencing reads to improve phasing accuracy. Our model incorporates the read information in a probabilistic model through base quality scores within each read. The method is primarily designed for high-coverage sequence data or data sets that already have genotypes called. One important application is phasing of single samples sequenced at high coverage for use in medical sequencing and studies of rare diseases. Our method can also use existing panels of reference haplotypes. We tested the method by using a mother-father-child trio sequenced at high coverage by Illumina together with the low-coverage sequence data from the 1000 Genomes Project (1000GP). We found that use of phase-informative reads increases the mean distance between switch errors by 22% from 274.4 kb to 328.6 kb. We also used male chromosome X haplotypes from the 1000GP samples to simulate sequencing reads with varying insert size, read length, and base error rate. When using short 100 bp paired-end reads, we found that using mixtures of insert sizes produced the best results. When using longer reads with high error rates (5-20 kb reads with 4%-15% error per base), phasing performance was substantially improved. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
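    The switch-error statistic reported above can be made concrete with a small sketch (illustrative only; this is not the SHAPEIT2 implementation, and the haplotypes and coordinates below are invented):

    ```python
    # A switch error occurs where the inferred haplotype flips phase relative
    # to the truth between consecutive heterozygous sites; phasing quality is
    # then summarised as the mean genomic distance between such switches.

    def switch_error_positions(true_hap, inferred_hap, positions):
        """Return base-pair positions where the inferred phase switches."""
        # 0 where the inferred allele matches the true haplotype, 1 where flipped
        rel = [t ^ i for t, i in zip(true_hap, inferred_hap)]
        return [positions[k] for k in range(1, len(rel)) if rel[k] != rel[k - 1]]

    def mean_switch_distance(true_hap, inferred_hap, positions):
        s = switch_error_positions(true_hap, inferred_hap, positions)
        if len(s) < 2:
            return None                    # fewer than two switches: undefined
        return sum(b - a for a, b in zip(s, s[1:])) / (len(s) - 1)

    true_h = [0, 0, 0, 0, 0, 0]
    infer  = [0, 0, 1, 1, 0, 0]            # a block of flipped phase
    pos    = [100, 200, 300, 400, 500, 600]
    print(switch_error_positions(true_h, infer, pos))   # [300, 500]
    ```

    On this toy example the two switch errors are 200 bp apart, so the mean switch distance is 200; the paper's 274.4 kb to 328.6 kb improvement is this same quantity computed genome-wide.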

  11. Bacterial population dynamics during the ensiling of Medicago sativa (alfalfa) and subsequent exposure to air.

    PubMed

    McGarvey, J A; Franco, R B; Palumbo, J D; Hnasko, R; Stanker, L; Mitloehner, F M

    2013-06-01

    To describe, at high resolution, the bacterial population dynamics and chemical transformations during the ensiling of alfalfa and subsequent exposure to air. Samples of alfalfa, ensiled alfalfa and silage exposed to air were collected and their bacterial population structures compared using 16S rRNA gene libraries containing approximately 1900 sequences each. Cultural and chemical analyses were also performed to complement the 16S gene sequence data. Sequence analysis revealed significant differences (P < 0·05) in the bacterial populations at each time point. The alfalfa-derived library contained mostly sequences associated with the Gammaproteobacteria (including the genera: Enterobacter, Erwinia and Pantoea); the ensiled material contained mostly sequences associated with the lactic acid bacteria (LAB) (including the genera: Lactobacillus, Pediococcus and Lactococcus). Exposure to air resulted in even greater percentages of LAB, especially among the genus Lactobacillus, and a significant drop in bacterial diversity. In-depth 16S rRNA gene sequence analysis revealed significant bacterial population structure changes during ensiling and again during exposure to air. This in-depth description of the bacterial population dynamics that occurred during ensiling and simulated feed out expands our knowledge of these processes. © 2013 The Society for Applied Microbiology No claim to US Government works.

  12. Internally generated hippocampal sequences as a vantage point to probe future-oriented cognition.

    PubMed

    Pezzulo, Giovanni; Kemere, Caleb; van der Meer, Matthijs A A

    2017-05-01

    Information processing in the rodent hippocampus is fundamentally shaped by internally generated sequences (IGSs), expressed during two different network states: theta sequences, which repeat and reset at the ∼8 Hz theta rhythm associated with active behavior, and punctate sharp wave-ripple (SWR) sequences associated with wakeful rest or slow-wave sleep. A potpourri of diverse functional roles has been proposed for these IGSs, resulting in a fragmented conceptual landscape. Here, we advance a unitary view of IGSs, proposing that they reflect an inferential process that samples a policy from the animal's generative model, supported by hippocampus-specific priors. The same inference affords different cognitive functions when the animal is in distinct dynamical modes, associated with specific functional networks. Theta sequences arise when inference is coupled to the animal's action-perception cycle, supporting online spatial decisions, predictive processing, and episode encoding. SWR sequences arise when the animal is decoupled from the action-perception cycle and may support offline cognitive processing, such as memory consolidation, the prospective simulation of spatial trajectories, and imagination. We discuss the empirical bases of this proposal in relation to rodent studies and highlight how the proposed computational principles can shed light on the mechanisms of future-oriented cognition in humans. © 2017 New York Academy of Sciences.

  13. BlackOPs: increasing confidence in variant detection through mappability filtering.

    PubMed

    Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil

    2013-10-01

    Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
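    The final filtering step described above can be sketched as follows (illustrative data; BlackOPs itself builds the blacklist by simulating reads from the reference genome, realigning them, and recording the spurious variant calls that result from mismapping):

    ```python
    # Filter called variants against a blacklist of mapping-caused artifacts.
    # Entries are (chromosome, position, variant allele) tuples; both values
    # below are hypothetical examples, not output of the actual tool.

    blacklist = {("chr1", 1000, "A"), ("chr2", 5000, "T")}

    calls = [
        ("chr1", 1000, "A"),   # matches a known mismapping artifact -> filtered
        ("chr1", 2000, "G"),   # not blacklisted -> retained
        ("chr2", 5000, "C"),   # same position, different allele -> retained
    ]

    # keep only calls that do not match a blacklisted position/allele pair
    retained = [c for c in calls if c not in blacklist]
    print(retained)
    ```

    Because the blacklist is keyed on both position and allele, a genuine variant at a blacklisted position with a different allele survives the filter, which matches the position-and-allele blacklists the abstract describes.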

  14. Using Molecular Dynamics Simulations as an Aid in the Prediction of Domain Swapping of Computationally Designed Protein Variants.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Thomas, Leonard M; Mayo, Stephen L

    2015-08-14

    In standard implementations of computational protein design, a positive-design approach is used to predict sequences that will be stable on a given backbone structure. Possible competing states are typically not considered, primarily because appropriate structural models are not available. One potential competing state, the domain-swapped dimer, is especially compelling because it is often nearly identical with its monomeric counterpart, differing by just a few mutations in a hinge region. Molecular dynamics (MD) simulations provide a computational method to sample different conformational states of a structure. Here, we tested whether MD simulations could be used as a post-design screening tool to identify sequence mutations leading to domain-swapped dimers. We hypothesized that a successful computationally designed sequence would have backbone structure and dynamics characteristics similar to that of the input structure and that, in contrast, domain-swapped dimers would exhibit increased backbone flexibility and/or altered structure in the hinge-loop region to accommodate the large conformational change required for domain swapping. While attempting to engineer a homodimer from a 51-amino-acid fragment of the monomeric protein engrailed homeodomain (ENH), we had instead generated a domain-swapped dimer (ENH_DsD). MD simulations on these proteins showed increased B-factors derived from MD simulation in the hinge loop of the ENH_DsD domain-swapped dimer relative to monomeric ENH. Two point mutants of ENH_DsD designed to recover the monomeric fold were then tested with an MD simulation protocol. The MD simulations suggested that one of these mutants would adopt the target monomeric structure, which was subsequently confirmed by X-ray crystallography. Copyright © 2015. Published by Elsevier Ltd.

  15. Understanding the structural and dynamic consequences of DNA epigenetic modifications: Computational insights into cytosine methylation and hydroxymethylation

    PubMed Central

    Carvalho, Alexandra T P; Gouveia, Leonor; Kanna, Charan Raju; Wärmländer, Sebastian K T S; Platts, Jamie A; Kamerlin, Shina Caroline Lynn

    2014-01-01

    We report a series of molecular dynamics (MD) simulations of up to a microsecond combined simulation time designed to probe epigenetically modified DNA sequences. More specifically, by monitoring the effects of methylation and hydroxymethylation of cytosine in different DNA sequences, we show, for the first time, that DNA epigenetic modifications change the molecule's dynamical landscape, increasing the propensity of DNA toward different values of twist and/or roll/tilt angles (in relation to the unmodified DNA) at the modification sites. Moreover, both the extent and position of different modifications have significant effects on the amount of structural variation observed. We propose that these conformational differences, which are dependent on the sequence environment, can provide specificity for protein binding. PMID:25625845

  16. Coded spread spectrum digital transmission system design study

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.

    1974-01-01

    Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
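    The encoder side of such a coded link can be sketched in a few lines. The generator taps below are placeholders chosen for illustration, since the abstract does not specify the polynomials used, and the Viterbi decoder, carrier tracking, and spread-spectrum stages are omitted:

    ```python
    # Minimal rate-1/3, constraint-length-7 convolutional encoder sketch.
    # Each input bit is shifted into a 7-bit register; each of the three
    # generators taps the register and emits the parity of the tapped bits,
    # giving three coded bits per information bit (rate 1/3).

    G = [0o133, 0o145, 0o175]    # illustrative 7-bit generator taps (octal)
    K = 7                        # constraint length

    def encode(bits):
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & ((1 << K) - 1)   # shift in new bit
            for g in G:
                out.append(bin(state & g).count("1") % 2)  # parity of taps
        return out

    coded = encode([1, 0, 1, 1])
    print(len(coded))   # 12: three coded bits per information bit
    ```

    A Viterbi decoder then searches the trellis defined by this state machine for the maximum-likelihood input sequence, which is the decoding scheme whose sensitivity to tracking and synchronization errors the study quantifies.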

  17. Accelerated Radiation-Damping for Increased Spin Equilibrium (ARISE)

    PubMed Central

    Huang, Susie Y.; Witzel, Thomas; Wald, Lawrence L.

    2008-01-01

    Control of the longitudinal magnetization in fast gradient echo sequences is an important factor enabling the high efficiency of balanced Steady State Free Precession (bSSFP) sequences. We introduce a new method for accelerating the return of the longitudinal magnetization to the +z-axis that is independent of externally applied RF pulses and shows improved off-resonance performance. The Accelerated Radiation damping for Increased Spin Equilibrium (ARISE) method uses an external feedback circuit to strengthen the Radiation Damping (RD) field. The enhanced RD field rotates the magnetization back to the +z-axis at a rate faster than T1 relaxation. The method is characterized in gradient echo phantom imaging at 3 T as a function of feedback gain, phase, and duration and compared with results from numerical simulations of the Bloch equations incorporating RD. A short period of feedback (10 ms) during a refocused interval of a crushed gradient echo sequence allowed greater than 99% recovery of the longitudinal magnetization when very little T2 relaxation has time to occur. Appropriate applications might include improving navigated sequences. Unlike conventional flip-back schemes, the ARISE “flip-back” is generated by the spins themselves, thereby offering a potentially useful building block for enhancing gradient echo sequences. PMID:18956463

  18. DNA tetrominoes: the construction of DNA nanostructures using self-organised heterogeneous deoxyribonucleic acids shapes.

    PubMed

    Ong, Hui San; Rahim, Mohd Syafiq; Firdaus-Raih, Mohd; Ramlan, Effirul Ikhwan

    2015-01-01

    The unique programmability of nucleic acids offers an alternative route to constructing excitable and functional nanostructures. This work introduces an autonomous protocol to construct DNA Tetris shapes (L-Shape, B-Shape, T-Shape and I-Shape) using modular DNA blocks. The protocol exploits the rich number of sequence combinations available from the nucleic acid alphabet, thus allowing diversity to be applied in designing various DNA nanostructures. Instead of a deterministic set of sequences corresponding to a particular design, the protocol promotes a large pool of DNA shapes that can assemble to conform to any desired structure. By utilising evolutionary programming in the design stage, DNA blocks are subjected to processes such as sequence insertion, deletion and base shifting in order to enrich the diversity of the resulting shapes based on a set of cascading filters. The optimisation algorithm allows mutation to be exerted indefinitely on the candidate sequences until they comply with all four fitness criteria. Candidates generated by the protocol are in agreement with the filter cascades and thermodynamic simulation. Further validation using gel electrophoresis indicated the formation of the designed shapes, supporting the plausibility of constructing DNA nanostructures in a more hierarchical, modular and interchangeable manner.
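    The mutate-until-compliant loop can be sketched as follows. The mutation operators mirror those named above (insertion, deletion, base shifting), with substitution added for illustration, while the fitness filters are invented toy stand-ins for the paper's four cascading criteria (which include thermodynamic checks not reproduced here):

    ```python
    # Toy evolutionary design loop: mutate a candidate DNA sequence until it
    # passes a cascade of (invented) fitness filters. Illustrative only.
    import random

    BASES = "ACGT"

    def mutate(seq, rng):
        # substitution is weighted higher so sequence length drifts slowly
        op = rng.choices(["sub", "ins", "del", "shift"], weights=[7, 1, 1, 1])[0]
        i = rng.randrange(len(seq))
        if op == "sub":
            return seq[:i] + rng.choice(BASES) + seq[i + 1:]
        if op == "ins":
            return seq[:i] + rng.choice(BASES) + seq[i:]
        if op == "del" and len(seq) > 1:
            return seq[:i] + seq[i + 1:]
        return seq[1:] + seq[0]                 # base shift (rotation)

    def passes_filters(seq):
        # toy cascade: length window, GC content, no homopolymer run of 4+
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        no_runs = all(b * 4 not in seq for b in BASES)
        return 18 <= len(seq) <= 22 and 0.4 <= gc <= 0.6 and no_runs

    def design(start, rng, max_steps=200000):
        seq = start
        for _ in range(max_steps):
            if passes_filters(seq):             # compliant: stop mutating
                return seq
            seq = mutate(seq, rng)
        return None

    rng = random.Random(1)
    candidate = design("A" * 20, rng)
    ```

    The real protocol differs in that many candidates evolve in parallel and the filters form an ordered cascade, but the core idea, indefinite mutation until every criterion is met, is the same.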

  19. Reconstructing genealogies of serial samples under the assumption of a molecular clock using serial-sample UPGMA.

    PubMed

    Drummond, A; Rodrigo, A G

    2000-12-01

    Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
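    Since sUPGMA is a variant of UPGMA, the average-linkage core it builds on can be sketched directly (illustrative only; the serial-sample constraint that makes tips terminate at heights matching their sampling times is omitted, and the distances below are invented):

    ```python
    # Plain UPGMA: repeatedly merge the closest pair of clusters, updating
    # distances by size-weighted averaging. Returns a nested-tuple tree.

    def upgma(labels, d):
        """labels: leaf names; d: {(a, b): distance} over unordered pairs."""
        size = {l: 1 for l in labels}
        dist = {frozenset(k): v for k, v in d.items()}
        nodes = list(labels)
        while len(nodes) > 1:
            a, b = min(
                ((x, y) for i, x in enumerate(nodes) for y in nodes[i + 1:]),
                key=lambda p: dist[frozenset(p)],
            )
            merged = (a, b)
            size[merged] = size[a] + size[b]
            for c in nodes:
                if c not in (a, b):
                    # average linkage weighted by cluster sizes
                    dac = dist[frozenset((a, c))]
                    dbc = dist[frozenset((b, c))]
                    dist[frozenset((merged, c))] = (
                        size[a] * dac + size[b] * dbc
                    ) / (size[a] + size[b])
            nodes = [c for c in nodes if c not in (a, b)] + [merged]
        return nodes[0]

    tree = upgma(
        ["s1", "s2", "s3"],
        {("s1", "s2"): 2.0, ("s1", "s3"): 6.0, ("s2", "s3"): 6.0},
    )
    print(tree)   # ('s3', ('s1', 's2'))
    ```

    sUPGMA modifies this scheme by correcting the input distance matrix for the known sampling times before clustering, so that earlier samples end their terminal branches at proportionally lower heights.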

  20. Sequence Dependencies of DNA Deformability and Hydration in the Minor Groove

    PubMed Central

    Yonetani, Yoshiteru; Kono, Hidetoshi

    2009-01-01

    Abstract DNA deformability and hydration are both sequence-dependent and are essential in specific DNA sequence recognition by proteins. However, the relationship between the two is not well understood. Here, systematic molecular dynamics simulations of 136 DNA sequences that differ from each other in their central tetramer revealed that sequence dependence of hydration is clearly correlated with that of deformability. We show that this correlation can be illustrated by four typical cases. Most rigid basepair steps are highly likely to form an ordered hydration pattern composed of one water molecule forming a bridge between the bases of distinct strands, but a few exceptions favor another ordered hydration composed of two water molecules forming such a bridge. Steps with medium deformability can display both of these hydration patterns with frequent transition. Highly flexible steps do not have any stable hydration pattern. A detailed picture of this correlation demonstrates that motions of hydration water molecules and DNA bases are tightly coupled with each other at the atomic level. These results contribute to our understanding of the entropic contribution from water molecules in protein or drug binding and could be applied for the purpose of predicting binding sites. PMID:19686662

  1. Improving the time efficiency of the Fourier synthesis method for slice selection in magnetic resonance imaging.

    PubMed

    Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y

    2016-01-01

    The design of slice selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach to solve these optimal control problems. In this method the gradient field as well as the excitation field are switched rapidly and their amplitudes are calculated based on a Fourier series expansion. Here, we provide novel insight into the Fourier synthesis method by representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses for slice selection that is more time efficient than the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the time required for the proposed sequence of pulses is half that of the original sequence. Furthermore, the slice selectivity of both sequences of pulses changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of the two sequences of pulses. This measure indicates that, for a desired level of uniformity in the excited slice, the gradient complexity of the proposed sequence of pulses is less than that of the original sequence. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
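    The pulse-design setting above rests on integrating the Bloch equation. As a minimal, assumption-laden illustration (ignoring relaxation, gradients, and the spherical-coordinate formulation the paper actually uses), an on-resonance hard-pulse excitation can be simulated by composing small rotations of the magnetisation vector:

    ```python
    # Compose small rotations about the x-axis to model a hard RF pulse
    # acting on the magnetisation m = (mx, my, mz). No relaxation, no
    # gradient: purely a sketch of the rotation kinematics.
    import math

    def rot_x(m, theta):
        """Rotate magnetisation vector m about the x-axis by angle theta."""
        mx, my, mz = m
        c, s = math.cos(theta), math.sin(theta)
        return (mx, my * c - mz * s, my * s + mz * c)

    m = (0.0, 0.0, 1.0)                 # equilibrium magnetisation along +z
    for _ in range(90):                 # ninety 1-degree sub-rotations
        m = rot_x(m, math.radians(1.0))
    # net 90-degree excitation: magnetisation tipped into the transverse plane
    print(m)
    ```

    In the actual slice-selection problem the rotation axis and rate vary with position through the gradient field, which is what the Fourier synthesis method's rapidly switched amplitudes control.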

  2. SEQUENCING of TSUNAMI WAVES: Why the first wave is not always the largest?

    NASA Astrophysics Data System (ADS)

    Synolakis, C.; Okal, E.

    2016-12-01

    We discuss what contributes to the `sequencing' of tsunami waves in the far field, that is, to the distribution of the maximum sea surface amplitude inside the dominant wave packet constituting the primary arrival at a distant harbour. Based on simple models of sources for which analytical solutions are available, we show that, as range is increased, the wave pattern evolves from a regime of maximum amplitude in the first oscillation to one of delayed maximum, where the largest amplitude takes place during a subsequent oscillation. In the case of the simple, instantaneous uplift of a circular disk at the surface of an ocean of constant depth, the critical distance for transition between those patterns scales as r_0^3 / h^2, where r0 is the radius of the disk and h the depth of the ocean. This behaviour is explained from simple arguments based on a model where sequencing results from frequency dispersion in the primary wave packet, as the width of its spectrum around its dominant period T0 becomes dispersed in time in an amount comparable to T0, the latter being controlled by a combination of source size and ocean depth. The general concepts in this model are confirmed in the case of more realistic sources for tsunami excitation by a finite-time deformation of the ocean floor, as well as in real-life simulations of tsunamis excited by large subduction events, for which we find that the influence of fault width on the distribution of sequencing is more important than that of fault length. Finally, simulation of the major events of Chile (2010) and Japan (2011) at large arrays of virtual gauges in the Pacific Basin correctly predicts the majority of the sequencing patterns observed on DART buoys during these events. 
By providing insight into the evolution with time of wave amplitudes inside primary wave packets for far field tsunamis generated by large earthquakes, our results stress the importance, for civil defense authorities, of issuing warning and evacuation orders of sufficient duration to avoid the hazard inherent in premature calls for all-clear.

  3. Sequencing of tsunami waves: why the first wave is not always the largest

    NASA Astrophysics Data System (ADS)

    Okal, Emile A.; Synolakis, Costas E.

    2016-02-01

    This paper examines the factors contributing to the `sequencing' of tsunami waves in the far field, that is, to the distribution of the maximum sea surface amplitude inside the dominant wave packet constituting the primary arrival at a distant harbour. Based on simple models of sources for which analytical solutions are available, we show that, as range is increased, the wave pattern evolves from a regime of maximum amplitude in the first oscillation to one of delayed maximum, where the largest amplitude takes place during a subsequent oscillation. In the case of the simple, instantaneous uplift of a circular disk at the surface of an ocean of constant depth, the critical distance for transition between those patterns scales as r_0^3 / h^2 where r0 is the radius of the disk and h the depth of the ocean. This behaviour is explained from simple arguments based on a model where sequencing results from frequency dispersion in the primary wave packet, as the width of its spectrum around its dominant period T0 becomes dispersed in time in an amount comparable to T0, the latter being controlled by a combination of source size and ocean depth. The general concepts in this model are confirmed in the case of more realistic sources for tsunami excitation by a finite-time deformation of the ocean floor, as well as in real-life simulations of tsunamis excited by large subduction events, for which we find that the influence of fault width on the distribution of sequencing is more important than that of fault length. Finally, simulation of the major events of Chile (2010) and Japan (2011) at large arrays of virtual gauges in the Pacific Basin correctly predicts the majority of the sequencing patterns observed on DART buoys during these events. 
By providing insight into the evolution with time of wave amplitudes inside primary wave packets for far field tsunamis generated by large earthquakes, our results stress the importance, for civil defense authorities, of issuing warning and evacuation orders of sufficient duration to avoid the hazard inherent in premature calls for all-clear.

  4. Targeted isolation, sequence assembly and characterization of two white spruce (Picea glauca) BAC clones for terpenoid synthase and cytochrome P450 genes involved in conifer defence reveal insights into a conifer genome

    PubMed Central

    2009-01-01

    Background Conifers are a large group of gymnosperm trees which are separated from the angiosperms by more than 300 million years of independent evolution. Conifer genomes are extremely large and contain considerable amounts of repetitive DNA. Currently, conifer sequence resources exist predominantly as expressed sequence tags (ESTs) and full-length cDNAs (FLcDNAs). There is no genome sequence available for a conifer or any other gymnosperm. Conifer defence-related genes often group into large families with closely related members. The goals of this study are to assess the feasibility of targeted isolation and sequence assembly of conifer BAC clones containing specific genes from two large gene families, and to characterize large segments of genomic DNA sequence for the first time from a conifer. Results We used a PCR-based approach to identify BAC clones for two target genes, a terpene synthase (3-carene synthase; 3CAR) and a cytochrome P450 (CYP720B4) from a non-arrayed genomic BAC library of white spruce (Picea glauca). Shotgun genomic fragments isolated from the BAC clones were sequenced to a depth of 15.6- and 16.0-fold coverage, respectively. Assembly and manual curation yielded sequence scaffolds 172 kbp (3CAR) and 94 kbp (CYP720B4) in length. Inspection of the genomic sequences revealed the intron-exon structures, the putative promoter regions and putative cis-regulatory elements of these genes. Sequences related to transposable elements (TEs), high complexity repeats and simple repeats were prevalent and comprised approximately 40% of the sequenced genomic DNA. An in silico simulation of the effect of sequencing depth on the quality of the sequence assembly provides direction for future efforts of conifer genome sequencing. Conclusion We report the first targeted cloning, sequencing, assembly, and annotation of large segments of genomic DNA from a conifer. 
We demonstrate that genomic BAC clones for individual members of multi-member gene families can be isolated in a gene-specific fashion. The results of the present work provide important new information about the structure and content of conifer genomic DNA that will guide future efforts to sequence and assemble conifer genomes. PMID:19656416

  5. All APAPs Are Not Equivalent for the Treatment of Sleep Disordered Breathing: A Bench Evaluation of Eleven Commercially Available Devices

    PubMed Central

    Zhu, Kaixian; Roisman, Gabriel; Aouf, Sami; Escourrou, Pierre

    2015-01-01

    Study Objectives: This study challenged, on a bench test, the efficacy of auto-titrating positive airway pressure (APAP) devices for the treatment of obstructive sleep disordered breathing (SDB) and evaluated the accuracy of the device reports. Methods: Our bench consisted of an active lung simulator and a Starling resistor. Eleven commercially available APAP devices were evaluated on their reactions to single-type SDB sequences (obstructive apnea and hypopnea, central apnea, and snoring), to a long general breathing scenario (5.75 h) simulating various SDB events during four sleep cycles, and to a short scenario (95 min) simulating one sleep cycle. Results: In the single-type sequence of 30-minute repetitive obstructive apneas, only 5 devices normalized the airflow (> 70% of baseline breathing amplitude). Similarly, normalized breathing was recorded with only 8 devices for a 20-min obstructive hypopnea sequence. Five devices increased the pressure in response to snoring. Only 4 devices maintained a constant minimum pressure when subjected to repeated central apneas with an open upper airway. In the long general breathing scenario, the pressure responses and the treatment efficacy differed among devices: only 5 devices achieved a residual obstructive apnea-hypopnea index (AHI) < 5/h. During the short general breathing scenario, only 2 devices reached the same treatment efficacy (p < 0.001), and 3 devices underestimated the AHI by > 10% (p < 0.001). The long scenario led to more consistent device reports. Conclusion: Large differences between APAP devices in treatment efficacy and reporting accuracy were evidenced in the current study. Citation: Zhu K, Roisman G, Aouf S, Escourrou P. All APAPs are not equivalent for the treatment of sleep disordered breathing: a bench evaluation of eleven commercially available devices. J Clin Sleep Med 2015;11(7):725–734. PMID:25766708

  6. Use of Internet Resources in the Biology Lecture Classroom.

    ERIC Educational Resources Information Center

    Francis, Joseph W.

    2000-01-01

    Introduces internet resources that are available for instructional use in biology classrooms. Provides information on video-based technologies to create and capture video sequences, interactive web sites that allow interaction with biology simulations, online texts, and interactive videos that display animated video sequences. (YDS)

  7. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
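
    The restart mechanism described for the simulated annealing variant can be sketched generically; the toy sequencing objective, neighbourhood, and all parameter values below are illustrative assumptions, not the paper's RMALB/S formulation.

```python
import math, random

def restarted_sa(cost, neighbor, init, t0=50.0, cooling=0.95,
                 steps_per_temp=20, t_min=1e-3, restarts=3, seed=7):
    """Sketch of the restart idea: whenever the temperature floor is
    reached, the temperature is reset to t0 and the search resumes
    from the best solution found so far instead of terminating."""
    rng = random.Random(seed)
    best = cur = list(init)
    best_c = cur_c = cost(cur)
    for _ in range(restarts + 1):
        t = t0
        while t > t_min:
            for _ in range(steps_per_temp):
                cand = neighbor(cur, rng)
                c = cost(cand)
                if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
                    cur, cur_c = cand, c
                    if c < best_c:
                        best, best_c = list(cand), c
            t *= cooling
        cur, cur_c = list(best), best_c   # restart from the incumbent
    return best, best_c

# Toy stand-in for model sequencing: reorder a permutation so that
# few adjacent pairs are out of order (cost = adjacent descents).
def cost(p):
    return sum(1 for a, b in zip(p, p[1:]) if a > b)

def neighbor(p, rng):
    q = list(p)
    i, j = rng.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

seq, c = restarted_sa(cost, neighbor, list(range(8))[::-1])
```

    The restart from the incumbent solution is what distinguishes this sketch from plain simulated annealing, which would simply stop at the temperature floor.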

  8. LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Cook, K. H.; Delgado, F.; Miller, M.; Saha, A.; Allsman, R.; Pinto, P.; Gee, P. A.

    2005-12-01

    We have developed an operations simulator for LSST and used it to explore the design and operations parameter space for this large-étendue telescope and its ten-year survey mission. The design is modular, with separate science programs coded in separate modules. There is a sophisticated telescope module with all motions parametrized for ease of testing different telescope capabilities, e.g. the effect of the acceleration capabilities of various motors on science output. Sky brightness is calculated as a function of moon phase and separation. A sophisticated exposure time calculator has been developed for LSST and is being incorporated into the simulator to allow specification of S/N requirements. All important parameters for the telescope, the site, and the science programs are easily accessible in configuration files. Seeing and cloud data from the three candidate LSST sites are used for our simulations. The simulator has two broad categories of science proposals: sky coverage and transient events. Sky-coverage proposals base their observing priorities on a required number of observations for each field in a particular filter under specified conditions (maximum seeing, sky brightness, etc.); one such proposal is used for a weak-lensing investigation. Transient proposals are highly configurable: a transient proposal can require sequential, multiple exposures in various filters with a specified sequence of filters, and a particular cadence for multiple revisits to complete an observation sequence. Each science proposal ranks potential observations based upon the internal logic of that proposal. We present the results of a variety of mixed science program observing simulations, showing how varied programs can be carried out simultaneously, with many observations serving multiple science goals. The simulator has shown that LSST can carry out its multiple missions under a variety of conditions. KHC's work was performed under the auspices of the US DOE, NNSA by the Univ. of California, LLNL under contract No. W-7405-Eng-48.
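
    The rank-based selection described above can be sketched as follows; the proposal scoring functions, field attributes, and the summing of ranks across proposals are all simplifying assumptions for illustration, not the simulator's actual logic.

```python
def pick_next_field(fields, proposals):
    """Greedy scheduler sketch: each science proposal assigns a rank
    to every candidate field from its own internal logic, and the
    field with the highest summed rank is observed next, so one
    observation can serve several science goals at once."""
    return max(fields, key=lambda f: sum(p(f) for p in proposals))

# Hypothetical fields with per-filter visit deficits and current seeing.
fields = [
    {"name": "F1", "deficit_r": 3, "seeing": 0.8},
    {"name": "F2", "deficit_r": 1, "seeing": 0.6},
    {"name": "F3", "deficit_r": 4, "seeing": 1.4},
]

# Sky-coverage proposal: prefer fields still missing visits, but only
# under an assumed maximum-seeing condition (1.2 arcsec here).
coverage = lambda f: f["deficit_r"] if f["seeing"] <= 1.2 else 0.0
# Transient proposal: prefer good seeing for a revisit cadence.
transient = lambda f: 1.0 / f["seeing"]

print(pick_next_field(fields, [coverage, transient])["name"])  # -> F1
```

    F3 has the largest deficit but fails the seeing condition, so its coverage rank drops to zero and F1 wins the combined ranking.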

  9. The white-dwarf cooling sequence of NGC 6791: a unique tool for stellar evolution

    NASA Astrophysics Data System (ADS)

    García-Berro, E.; Torres, S.; Renedo, I.; Camacho, J.; Althaus, L. G.; Córsico, A. H.; Salaris, M.; Isern, J.

    2011-09-01

    Context. NGC 6791 is a well-studied, metal-rich open cluster that is so close to us that it can be imaged down to luminosities fainter than that of the termination of its white-dwarf cooling sequence, thus allowing for an in-depth study of its white-dwarf population. Aims: White dwarfs carry important information about the history of the cluster. We use observations of the white-dwarf cooling sequence to constrain important properties of the cluster stellar population, such as the existence of a putative population of massive helium-core white dwarfs, and the properties of a large population of unresolved binary white dwarfs. We also investigate the use of white dwarfs to disclose the presence of cluster subpopulations with a different initial chemical composition, and we obtain an upper bound on the fraction of hydrogen-deficient white dwarfs. Methods: We use a Monte Carlo simulator that employs up-to-date evolutionary cooling sequences for white dwarfs with hydrogen-rich and hydrogen-deficient atmospheres, with carbon-oxygen and helium cores. The cooling sequences for carbon-oxygen cores account for the delays introduced both by 22Ne sedimentation in the liquid phase and by carbon-oxygen phase separation upon crystallization. Results: We do not find evidence for a substantial fraction of helium-core white dwarfs, and hence our results support the suggestion that the origin of the bright peak of the white-dwarf luminosity function can only be attributed to a population of unresolved binary white dwarfs. Moreover, our results indicate that if this hypothesis is at the origin of the bright peak, the number distribution of secondary masses of the population of unresolved binaries has to increase with increasing mass ratio between the secondary and primary components of the progenitor system. We also find that the observed cooling sequence appears to be able to constrain the presence of progenitor subpopulations with different chemical compositions and the fraction of hydrogen-deficient white dwarfs. Conclusions: Our simulations place interesting constraints on important characteristics of the stellar populations of NGC 6791. In particular, we find that the fraction of single helium-core white dwarfs must be smaller than 5%, that a subpopulation of stars with zero metallicity must be ≲12%, while if the adopted metallicity of the subpopulation is solar the upper limit is ~8%. Finally, we also find that the fraction of hydrogen-deficient white dwarfs in this particular cluster is surprisingly small (≲6%).
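
    A Monte Carlo population synthesis of this kind can be sketched schematically; the power-law initial-mass-function sampling and the main-sequence lifetime law below are toy stand-ins chosen for illustration, not the evolutionary cooling sequences the study employs.

```python
import random

def wd_cooling_ages(cluster_age_gyr=8.0, n=10000, seed=42):
    """Schematic Monte Carlo of a cluster's white-dwarf cooling-age
    distribution: draw progenitor masses, and any star whose toy
    main-sequence lifetime is shorter than the cluster age is a
    white dwarf that has been cooling for the remaining time."""
    rng = random.Random(seed)
    # Inverse-CDF sampling of a power-law IMF (assumed slope -2.35)
    # between 0.8 and 8 solar masses.
    lo, hi = 0.8 ** -1.35, 8.0 ** -1.35
    ages = []
    for _ in range(n):
        m = (lo + rng.random() * (hi - lo)) ** (-1 / 1.35)
        t_ms = 10.0 * m ** -2.5        # toy main-sequence lifetime (Gyr)
        if t_ms < cluster_age_gyr:     # the progenitor is a white dwarf now
            ages.append(cluster_age_gyr - t_ms)
    return ages

ages = wd_cooling_ages()
```

    Binning the resulting cooling ages (converted to luminosities by a cooling track) is what produces a synthetic white-dwarf luminosity function to compare against the observed one.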

  10. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two deterministic networks ((1, 3)-flower and (2, 2)-flower) in detail by exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulations, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks has the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
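
    Exact controllability theory, as used above, gives the minimum number of driver nodes of an undirected, unweighted network as the largest eigenvalue multiplicity of its adjacency matrix. A minimal sketch (the example graphs are our own, not the (x,y)-flowers of the paper):

```python
import numpy as np

def min_driver_nodes(adj, tol=1e-8):
    """Exact controllability for an undirected, unweighted network:
    the minimum number of driver nodes is the maximum over
    eigenvalues lam of N - rank(lam*I - A), i.e. the largest
    eigenvalue multiplicity of the adjacency matrix A."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    nd = 1
    for lam in np.linalg.eigvalsh(A):
        mult = n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        nd = max(nd, mult)
    return nd

# Star graph K_{1,3}: eigenvalue 0 has multiplicity 2 -> 2 driver nodes.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(min_driver_nodes(star))  # -> 2
```

    A path graph, by contrast, has all-simple eigenvalues and needs only one driver node, illustrating how two small graphs with similar size can differ in controllability.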

  11. Spatiotemporal topology and temporal sequence identification with an adaptive time-delay neural network

    NASA Astrophysics Data System (ADS)

    Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.

    1993-08-01

    Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time-delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line by gradient descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, with temporal sequences that are completed by the network, demonstrating its applicability to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations and reproduces the trajectories dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of a sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN, as it allows the network to seek optimal values for time delays as well as optimal synapse weights.
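
    The joint adaptation of weights and delays by on-line gradient descent can be sketched for a single unit; the toy delayed-sinusoid task, learning rates, and numerical delay gradient below are illustrative assumptions, not the paper's network.

```python
import math

def atnn_step(w, d, x, target, t, lr_w=0.02, lr_d=0.02, eps=1e-4):
    """One on-line update of a single adaptive time-delay unit
    (a schematic sketch): the output mixes delayed copies of the
    input signal, y = sum_i w_i * x(t - d_i), and gradient descent
    adjusts both the weights w_i and the delays d_i. The signal
    derivative needed for the delay gradient is taken numerically."""
    y = sum(wi * x(t - di) for wi, di in zip(w, d))
    err = y - target
    # dE/dw_i = err * x(t - d_i);  dE/dd_i = -err * w_i * x'(t - d_i)
    xp = [(x(t - di + eps) - x(t - di - eps)) / (2 * eps) for di in d]
    new_w = [wi - lr_w * err * x(t - di) for wi, di in zip(w, d)]
    new_d = [di + lr_d * err * wi * xpi for wi, di, xpi in zip(w, d, xp)]
    return new_w, new_d, 0.5 * err * err

# Train on samples of a delayed sinusoid (hypothetical toy task).
x = math.sin
w, d = [0.5, 0.5], [0.1, 0.3]
for t in [0.0, 0.5, 1.0, 1.5, 2.0] * 40:
    w, d, e = atnn_step(w, d, x, target=math.sin(t - 0.2), t=t)
```

    Adapting the delays gives the unit a degree of freedom a fixed-delay TDNN lacks, which is the flexibility advantage the abstract refers to.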

  12. Neutron Source Facility Training Simulator Based on EPICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Young Soo; Wei, Thomas Y.; Vilim, Richard B.

    A plant operator training simulator was developed for training plant operators as well as for design verification of the plant control system (PCS) and plant protection system (PPS) for the Kharkov Institute of Physics and Technology Neutron Source Facility. The simulator provides the operator interface for the whole plant, including the sub-critical assembly coolant loop, target coolant loop, secondary coolant loop, and other facility systems. The operator interface is implemented based on the Experimental Physics and Industrial Control System (EPICS), a comprehensive software development platform for distributed control systems. Since its development at Argonne National Laboratory, EPICS has been widely adopted in the experimental physics community, e.g. for control of accelerator facilities. This work is the first implementation for a nuclear facility. The main parts of the operator interface are the plant control panel and plant protection panel. The development involved implementation of the process variable database, sequence logic, and graphical user interface (GUI) for the PCS and PPS utilizing EPICS and related software tools, e.g. the sequencer for sequence logic and Control System Studio (CSS-BOY) for the graphical user interface. For functional verification of the PCS and PPS, a plant model is interfaced, which is a physics-based model of the facility coolant loops implemented as a numerical computer code. The training simulator was tested and demonstrated its effectiveness in various plant operation sequences, e.g. start-up, shut-down, maintenance, and refueling. It was also tested for verification of the plant protection system under various trip conditions.

  13. Entropic formulation for the protein folding process: Hydrophobic stability correlates with folding rates

    NASA Astrophysics Data System (ADS)

    Dal Molin, J. P.; Caliri, A.

    2018-01-01

    Here we focus on the conformational search for the native structure when it is ruled by the hydrophobic effect and steric specificities coming from amino acids. Our main tool of investigation is a 3D lattice model provided with a ten-letter alphabet, the stereochemical model. This minimalist model was conceived for Monte Carlo (MC) simulations with the kinetic behavior of protein-like chains in solution in mind. We have three central goals here. The first one is to characterize the folding time (τ) by two distinct sampling methods, so we present two sets of 10³ MC simulations for a fast protein-like sequence. The resulting sets of characteristic folding times, τ and τq, were obtained by the application of the standard Metropolis algorithm (MA) as well as by an enhanced algorithm (MqA). The finding for τq shows two things: (i) the chain-solvent hydrophobic interactions {hk} plus a set of inter-residue steric constraints {ci,j} are able to emulate the conformational search for the native structure; for each of the 10³ MC simulations performed, the target is always found within a finite time window; (ii) the ratio τq/τ ≅ 1/10 suggests that the effect of local thermal fluctuations, encompassed by the Tsallis weight, provides the chain with an innate efficiency to escape from energetic and steric traps. To attest this first result we performed additional MC simulations with variations of our design rule; both algorithms, the MA and the MqA, were applied to a restricted set of targets, and a physical insight is provided. Our second finding was obtained from a set of 600 independent MC simulations, performed only with the MqA applied to an extended set of 200 representative targets, our native structures. The results show how structural patterns modulate τq, which covers four orders of magnitude; this finding is our second goal. The third, and last, result was obtained with a special kind of simulation performed to explore a possible connection between the hydrophobic component of protein stability and the native structural topology. We simulated those same 200 targets again, with the MqA only; this time, however, we evaluated the relative frequency {ϕq} with which each target visits its corresponding native structure along an appropriate simulation time. Due to the presence of the hydrophobic effect in our approach, we obtained a strong correlation between stability and folding rate (R = 0.85): the faster a sequence finds its target, the larger the hydrophobic component of its stability. The strong correlation fulfills our last goal. This final finding suggests that the hydrophobic effect could indeed be a general stabilizing factor for proteins.
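
    The contrast between the standard Metropolis weight and the Tsallis weight behind the MqA can be illustrated through their acceptance probabilities for an uphill move; the value q = 1.1 is an assumed illustration, not the study's parameter.

```python
import math

def p_metropolis(dE, T):
    """Standard Metropolis acceptance probability."""
    return 1.0 if dE <= 0 else math.exp(-dE / T)

def p_tsallis(dE, T, q=1.1):
    """Generalized (Tsallis-weight) acceptance, a sketch of the kind
    of rule behind an MqA-style sampler; it reduces to the Metropolis
    rule in the limit q -> 1."""
    if dE <= 0:
        return 1.0
    base = 1.0 - (1.0 - q) * dE / T
    if base <= 0.0:      # outside the support of the q-exponential
        return 0.0
    return min(1.0, base ** (1.0 / (1.0 - q)))

# For q > 1 the acceptance of uphill moves has a power-law tail, so
# the chain escapes energetic and steric traps more easily.
print(p_tsallis(1.0, 1.0) > p_metropolis(1.0, 1.0))  # -> True
```

    The heavier tail for q > 1 is one plausible reading of why the enhanced sampler reaches the target roughly ten times faster in the abstract's comparison.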

  14. MPS Editor

    NASA Technical Reports Server (NTRS)

    Mathews, William S.; Liu, Ning; Francis, Laurie K.; OReilly, Taifun L.; Schrock, Mitchell; Page, Dennis N.; Morris, John R.; Joswig, Joseph C.; Crockett, Thomas M.; Shams, Khawaja S.

    2011-01-01

    Previously, it was time-consuming to hand-edit data and then set up simulation runs to find the effect and impact of the input data on a spacecraft. MPS Editor provides the user the capability to create, edit, and update models and sequences, and immediately try them out, using what appears to the user as one piece of software. MPS Editor provides an integrated sequencing environment for users, with software that can be utilized during development as well as actual operations, and a single, consistent, user-friendly interface. MPS Editor is written in Java and uses the Eclipse Rich Client Platform to provide an environment that can be tailored to specific missions. It provides the capability to create and edit, and includes an Activity Dictionary to build the simulation spacecraft models, build and edit sequences of commands, and model the effects of those commands on the spacecraft. It is currently built with four perspectives: the Activity Dictionary Perspective, the Project Adaptation Perspective, the Sequence Building Perspective, and the Sequence Modeling Perspective. Each perspective performs a given task; if a mission doesn't require that task, the unneeded perspective is not added to that project's delivery. In the Activity Dictionary Perspective, the user builds the project-specific activities, observations, calibrations, etc. Typically, this is used during the development phases of the mission, although it can be used later to make changes and updates to the Project Activity Dictionary. In the Adaptation Perspective, the user creates the spacecraft models such as power, data store, etc. Again, this is typically used during development, but will also be used to update or add models of the spacecraft. The Sequence Building Perspective allows the user to create a sequence of activities or commands that go to the spacecraft, and provides a simulation of the activities and commands that have been created.

  15. Mean-field approaches to the totally asymmetric exclusion process with quenched disorder and large particles

    NASA Astrophysics Data System (ADS)

    Shaw, Leah B.; Sethna, James P.; Lee, Kelvin H.

    2004-08-01

    The process of protein synthesis in biological systems resembles a one-dimensional driven lattice gas in which the particles (ribosomes) have spatial extent, covering more than one lattice site. Realistic, nonuniform gene sequences lead to quenched disorder in the particle hopping rates. We study the totally asymmetric exclusion process (TASEP) with large particles and quenched disorder via several mean-field approaches and compare the mean-field results with Monte Carlo simulations. Mean-field equations obtained from the literature are found to be reasonably effective in describing this system. A numerical technique is developed for computing the particle current rapidly. The mean-field approach is extended to include two-point correlations between adjacent sites. The two-point results are found to match Monte Carlo simulations more closely.
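
    A driven-lattice-gas simulation of this kind can be sketched directly; the particle size ell, the injection/exit rates, and the random-sequential update below are illustrative assumptions rather than the paper's mean-field setup.

```python
import random

def tasep_current(n_sites=100, ell=3, alpha=0.3, beta=0.5,
                  sweeps=50000, seed=1, rates=None):
    """Monte Carlo sketch of a TASEP whose particles cover `ell`
    lattice sites (ribosome-like) with quenched, site-dependent
    hopping rates. Positions are tracked by each particle's leading
    edge; returns the average exit current per update attempt."""
    rng = random.Random(seed)
    # Quenched disorder: every site gets its own fixed hop rate.
    rates = rates or [0.5 + 0.5 * rng.random() for _ in range(n_sites)]
    pos = []          # leading edges, front particle first (descending)
    exits = 0
    for _ in range(sweeps):
        k = rng.randrange(len(pos) + 1)        # a particle, or injection
        if k == len(pos):
            # Inject a particle covering sites 0..ell-1 if there is room.
            if (not pos or pos[-1] >= 2 * ell - 1) and rng.random() < alpha:
                pos.append(ell - 1)
        elif pos[k] == n_sites - 1:
            if rng.random() < beta:            # particle leaves the lattice
                pos.pop(k)
                exits += 1
        elif (k == 0 or pos[k - 1] - pos[k] > ell) and rng.random() < rates[pos[k]]:
            pos[k] += 1                        # hop forward by one site
    return exits / sweeps

current = tasep_current(n_sites=60, ell=2)
```

    The exclusion check `pos[k-1] - pos[k] > ell` is what encodes the particles' spatial extent: a particle may advance only when its whole body fits behind the one ahead.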

  16. Simulations of Karenia Brevis on the West Florida Shelf

    NASA Astrophysics Data System (ADS)

    Lenes, J. M.; Darrow, B. P.; Chen, F. R.; Walsh, J. J.; Dieterle, D. A.; Weisberg, R. H.

    2010-12-01

    The ecological model HABSIM was developed and tested to examine the initiation and maintenance of red tides of the toxic dinoflagellate Karenia brevis on the West Florida shelf (WFS). Phytoplankton competition among K. brevis, nitrogen-fixing cyanophytes (Trichodesmium spp.), large siliceous phytoplankton (diatoms), and small non-siliceous phytoplankton (microflagellates) is simulated to explore the sequence of events required to support the observed bloom from August to December 2001. The ecological model contains twenty-two state variables within four submodels: atmospheric (iron deposition), biological (phytoplankton, bacteria, zooplankton, and fish), chemical (multiple species of carbon, nitrogen, phosphorus, silica, and iron), and benthic (nutrient regeneration). Here, we present results for the 2001 1-d hindcast simulations, with and without data assimilation of the Karenia state variable, as well as preliminary 3-d results.

  17. Propagation dynamics of successive emissions in laboratory and astrophysical jets and problem of their collimation

    NASA Astrophysics Data System (ADS)

    Kalashnikov, I.; Chardonnet, P.; Chechetkin, V.; Dodin, A.; Krauz, V.

    2018-06-01

    This paper presents the results of numerical simulation of the propagation of a sequence of plasma knots in laboratory conditions and in the astrophysical environment. The physical and geometric parameters of the simulation have been chosen close to the parameters of the PF-3 facility (Kurchatov Institute) and the jet of the star RW Aur. We found that the low-density region formed after the first knot propagation plays an important role in the collimation of the subsequent ones. Assuming only the thermal expansion of the subsequent emissions, qualitative estimates of the time taken to fill this area with the surrounding matter and the angle of jet scattering have been made. These estimates are consistent with observations and results of our modeling.

  18. Computational Modeling and Treatment Identification in the Myelodysplastic Syndromes.

    PubMed

    Drusbosky, Leylah M; Cogle, Christopher R

    2017-10-01

    This review discusses the need for computational modeling in myelodysplastic syndromes (MDS) and early test results. As our evolving understanding of MDS reveals a molecularly complicated disease, the need for sophisticated computer analytics is required to keep track of the number and complex interplay among the molecular abnormalities. Computational modeling and digital drug simulations using whole exome sequencing data input have produced early results showing high accuracy in predicting treatment response to standard of care drugs. Furthermore, the computational MDS models serve as clinically relevant MDS cell lines for pre-clinical assays of investigational agents. MDS is an ideal disease for computational modeling and digital drug simulations. Current research is focused on establishing the prediction value of computational modeling. Future research will test the clinical advantage of computer-informed therapy in MDS.

  19. Coarse-Grained Lattice Model Simulations of Sequence-Structure Fitness of a Ribosome-Inactivating Protein

    DTIC Science & Technology

    2007-11-05

    limits of what is considered practical when applying all-atom molecular-dynamics simulation methods. Lattice models provide computationally robust ... of expectation values from the density of states. All-atom molecular-dynamics simulations provide the most rigorous sampling method to generate con... molecular-dynamics simulations of protein folding [6–9], reported studies of computing a heat capacity or other calorimetric observables have been limited to

  20. Conformation and Stability of Intramolecular Telomeric G-Quadruplexes: Sequence Effects in the Loops

    PubMed Central

    Sattin, Giovanna; Artese, Anna; Nadai, Matteo; Costa, Giosuè; Parrotta, Lucia; Alcaro, Stefano; Palumbo, Manlio; Richter, Sara N.

    2013-01-01

    Telomeres are guanine-rich sequences that protect the ends of chromosomes. These regions can fold into G-quadruplex structures and their stabilization by G-quadruplex ligands has been employed as an anticancer strategy. Genetic analysis in human telomeres revealed extensive allelic variation restricted to loop bases, indicating that the variant telomeric sequences maintain the ability to fold into G-quadruplex. To assess the effect of mutations in loop bases on G-quadruplex folding and stability, we performed a comprehensive analysis of mutant telomeric sequences by spectroscopic techniques, molecular dynamics simulations and gel electrophoresis. We found that when the first position in the loop was mutated from T to C or A the resulting structure adopted a less stable antiparallel topology; when the second position was mutated to C or A, lower thermal stability and no evident conformational change were observed; in contrast, substitution of the third position from A to C induced a more stable and original hybrid conformation, while mutation to T did not significantly affect G-quadruplex topology and stability. Our results indicate that allelic variations generate G-quadruplex telomeric structures with variable conformation and stability. This aspect needs to be taken into account when designing new potential anticancer molecules. PMID:24367632
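
    The loop-position mutations analysed above can be enumerated programmatically from the human telomeric repeat; applying the same substitution to all three loops at once is our simplifying assumption for illustration.

```python
# Human telomeric intramolecular G-quadruplex core: four G-tracts
# (GGG) joined by three TTA loops.
WT_LOOP = "TTA"

def telomeric_sequence(loops):
    """Assemble a four-tract G4 sequence from three loop sequences:
    GGG-loop1-GGG-loop2-GGG-loop3-GGG."""
    assert len(loops) == 3
    return "GGG" + "".join(loop + "GGG" for loop in loops)

def loop_mutants(position, bases):
    """All single-base variants of the TTA loop at one position
    (0-based), applied uniformly to all three loops (an assumption
    made here for illustration)."""
    out = []
    for b in bases:
        if b != WT_LOOP[position]:
            loop = WT_LOOP[:position] + b + WT_LOOP[position + 1:]
            out.append(telomeric_sequence([loop] * 3))
    return out

print(telomeric_sequence([WT_LOOP] * 3))   # -> GGGTTAGGGTTAGGGTTAGGG
```

    For example, `loop_mutants(0, "CA")` yields the first-position T-to-C and T-to-A variants whose folding topology and stability the study compares against the wild type.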

Top