Design of Mariner 9 Science Sequences using Interactive Graphics Software
NASA Technical Reports Server (NTRS)
Freeman, J. E.; Sturms, F. M., Jr.; Webb, W. A.
1973-01-01
This paper discusses the analyst/computer system used to design the daily science sequences required to carry out the desired Mariner 9 science plan. The Mariner 9 computer environment, the development and capabilities of the science sequence design software, and the techniques followed in the daily mission operations are discussed. Included is a discussion of the overall mission operations organization and the individual components which played an essential role in the sequence design process. A summary of actual sequences processed, a discussion of problems encountered, and recommendations for future applications are given.
Streamlining the Design-to-Build Transition with Build-Optimization Software Tools.
Oberortner, Ernst; Cheng, Jan-Fang; Hillson, Nathan J; Deutsch, Samuel
2017-03-17
Scaling-up capabilities for the design, build, and test of synthetic biology constructs holds great promise for the development of new applications in fuels, chemical production, or cellular-behavior engineering. Construct design is an essential component in this process; however, not every designed DNA sequence can be readily manufactured, even using state-of-the-art DNA synthesis methods. Current biological computer-aided design and manufacture tools (bioCAD/CAM) do not adequately consider the limitations of DNA synthesis technologies when generating their outputs. Designed sequences that violate DNA synthesis constraints may require substantial sequence redesign or lead to price premiums and temporal delays, which adversely impact the efficiency of the DNA manufacturing process. We have developed a suite of build-optimization software tools (BOOST) to streamline the design-build transition in synthetic biology engineering workflows. BOOST incorporates knowledge of DNA synthesis success determinants into the design process to output ready-to-build sequences, preempting the need for sequence redesign. The BOOST web application is available at https://boost.jgi.doe.gov and its Application Program Interfaces (API) enable integration into automated, customized DNA design processes. The results presented here highlight the effectiveness of BOOST in reducing DNA synthesis costs and timelines.
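For illustration, a minimal Python sketch of the kind of synthesis-constraint screening that such tools automate is shown below; the specific thresholds (GC bounds, homopolymer length) are assumptions chosen for demonstration, not BOOST's actual rule set.

```python
# Illustrative sketch of DNA synthesis-constraint screening (not BOOST's actual rules).
# Thresholds below (GC bounds, homopolymer length) are assumed for demonstration only.

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_homopolymer(seq: str) -> int:
    """Length of the longest run of a single base."""
    longest, run = 1, 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def synthesis_violations(seq: str, gc_min=0.25, gc_max=0.65, max_homopolymer=8):
    """Return a list of human-readable constraint violations for one sequence."""
    problems = []
    gc = gc_content(seq)
    if not gc_min <= gc <= gc_max:
        problems.append(f"global GC content {gc:.2f} outside [{gc_min}, {gc_max}]")
    if longest_homopolymer(seq) > max_homopolymer:
        problems.append(f"homopolymer run longer than {max_homopolymer} bases")
    return problems

if __name__ == "__main__":
    candidate = "ATGAAAAAAAAAAGCGCGCGTTTACG"
    for issue in synthesis_violations(candidate):
        print("violation:", issue)
```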
Pancoska, Petr; Moravek, Zdenek; Moll, Ute M
2004-01-01
Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. Melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.
SeqTrim: a high-throughput pipeline for pre-processing any type of sequence read
2010-01-01
Background High-throughput automated sequencing has enabled an exponential growth rate of sequencing data. This requires increasing sequence quality and reliability in order to avoid database contamination with artefactual sequences. The arrival of pyrosequencing exacerbates this problem and necessitates customisable pre-processing algorithms. Results SeqTrim has been implemented both as a Web and as a standalone command line application. Already-published and newly-designed algorithms have been included to identify sequence inserts, to remove low quality, vector, adaptor, low complexity and contaminant sequences, and to detect chimeric reads. The availability of several input and output formats allows its inclusion in sequence processing workflows. Due to its specific algorithms, SeqTrim outperforms other pre-processors implemented as Web services or standalone applications. It performs equally well with sequences from EST libraries, SSH libraries, genomic DNA libraries and pyrosequencing reads and does not lead to over-trimming. Conclusions SeqTrim is an efficient pipeline designed for pre-processing of any type of sequence read, including next-generation sequencing. It is easily configurable and provides a friendly interface that allows users to know what happened with sequences at every pre-processing stage, and to verify pre-processing of an individual sequence if desired. The recommended pipeline reveals more information about each sequence than previously described pre-processors and can discard more sequencing or experimental artefacts. PMID:20089148
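A short Python sketch of one step in this kind of pre-processing, sliding-window quality trimming, is given below; the window size and quality cutoff are illustrative assumptions rather than SeqTrim's defaults.

```python
# Sketch of sliding-window quality trimming in the spirit of SeqTrim-style pre-processing.
# Window size and quality cutoff are illustrative assumptions, not SeqTrim's defaults.

def trim_low_quality(seq: str, quals: list, window: int = 5, cutoff: float = 20.0):
    """Trim the 3' end once the mean quality in a sliding window drops below cutoff."""
    for start in range(len(seq) - window + 1):
        mean_q = sum(quals[start:start + window]) / window
        if mean_q < cutoff:
            return seq[:start], quals[:start]
    return seq, quals

if __name__ == "__main__":
    read = "ACGTACGTACGTGGTT"
    phred = [38, 37, 36, 35, 34, 33, 30, 28, 25, 22, 18, 15, 12, 10, 8, 5]
    trimmed_seq, trimmed_quals = trim_low_quality(read, phred)
    print(trimmed_seq, len(trimmed_seq))
```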
Sequencing of Dust Filter Production Process Using Design Structure Matrix (DSM)
NASA Astrophysics Data System (ADS)
Sari, R. M.; Matondang, A. R.; Syahputri, K.; Anizar; Siregar, I.; Rizkya, I.; Ursula, C.
2018-01-01
A metal casting company produces machinery spare parts for manufacturers. One of its products is the dust filter, which is used in most palm oil mills. Because the product is in such wide demand, the company often has problems fulfilling orders for it. One of these problems is a disordered production process caused by poor job sequencing: important jobs that should be completed first are implemented last, while less important jobs that could be completed later are implemented first. The Design Structure Matrix (DSM) is used to analyse and determine priorities in the production process. The DSM analysis sorts the production process through dependency sequencing. The resulting dependency sequence shows the order of processes according to inter-process linkages, considering preceding and succeeding activities. Finally, it identifies the coupled activities among metal smelting, refining, grinding, cutting of container castings, removal of the metal from the molds, metal casting, coating processes, and manufacture of sand molds.
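As a rough illustration of how a DSM can drive such sequencing, the Python sketch below schedules activities whose dependencies are already satisfied and groups the remainder as coupled blocks; the activities and dependency matrix are invented for demonstration, not the paper's data.

```python
# Hypothetical sketch of sequencing activities from a Design Structure Matrix (DSM).
# dsm[i][j] == 1 means activity i depends on an output of activity j.
# The activities and dependencies below are invented for illustration.

def dsm_sequence(activities, dsm):
    """Repeatedly schedule activities whose unscheduled dependencies are exhausted.
    Activities left over at the end form coupled (interdependent) blocks."""
    remaining = set(range(len(activities)))
    order = []
    while remaining:
        ready = [i for i in remaining
                 if all(dsm[i][j] == 0 or j not in remaining for j in range(len(activities)))]
        if not ready:           # remaining activities are mutually coupled
            order.append(sorted(remaining))
            break
        for i in sorted(ready):
            order.append([i])
            remaining.remove(i)
    return order

if __name__ == "__main__":
    acts = ["mold making", "metal smelting", "casting", "grinding", "coating"]
    dsm = [[0, 0, 0, 0, 0],    # mold making depends on nothing
           [0, 0, 0, 0, 0],    # smelting depends on nothing
           [1, 1, 0, 0, 0],    # casting needs molds and molten metal
           [0, 0, 1, 0, 0],    # grinding follows casting
           [0, 0, 0, 1, 0]]    # coating follows grinding
    for block in dsm_sequence(acts, dsm):
        print([acts[i] for i in block])
```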
Sequence information signal processor for local and global string comparisons
Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.
1997-01-01
A sequence information signal processing integrated circuit chip designed to perform high speed calculation of a dynamic programming algorithm based upon the algorithm defined by Waterman and Smith. The signal processing chip of the present invention is designed to be a building block of a linear systolic array, the performance of which can be increased by connecting additional sequence information signal processing chips to the array. The chip provides a high speed, low cost linear array processor that can locate highly similar global sequences or segments thereof such as contiguous subsequences from two different DNA or protein sequences. The chip is implemented in a preferred embodiment using CMOS VLSI technology to provide the equivalent of about 400,000 transistors or 100,000 gates. Each chip provides 16 processing elements, and is designed to provide 16 bit, two's complement operation for maximum score precision of between -32,768 and +32,767. It is designed to provide a comparison between sequences as long as 4,194,304 elements without external software and between sequences of unlimited numbers of elements with the aid of external software. Each sequence can be assigned different deletion and insertion weight functions. Each processor is provided with a similarity measure device which is independently variable. Thus, each processor can contribute to maximum value score calculation using a different similarity measure.
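A plain software sketch of the Smith-Waterman local-alignment recurrence that this chip accelerates is shown below; the match/mismatch/gap scores are illustrative assumptions, whereas the hardware allows programmable weight functions per sequence.

```python
# Software sketch of the Smith-Waterman local-alignment recurrence that the chip
# accelerates in hardware. The scoring parameters here are illustrative; the chip
# allows each sequence its own insertion/deletion weight functions.

def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Return the maximum local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

if __name__ == "__main__":
    print(smith_waterman("GGTTGACTA", "TGTTACGG"))   # small DNA example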
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques making it possible to design patient‐specific sequences online. Theory and Methods The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences in the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383
Czar, Michael J; Cai, Yizhi; Peccoud, Jean
2009-07-01
Chemical synthesis of custom DNA made to order calls for software streamlining the design of synthetic DNA sequences. GenoCAD (www.genocad.org) is a free web-based application to design protein expression vectors, artificial gene networks and other genetic constructs composed of multiple functional blocks called genetic parts. By capturing design strategies in grammatical models of DNA sequences, GenoCAD guides the user through the design process. By successively clicking on icons representing structural features or actual genetic parts, complex constructs composed of dozens of functional blocks can be designed in a matter of minutes. GenoCAD automatically derives the construct sequence from its comprehensive libraries of genetic parts. Upon completion of the design process, users can download the sequence for synthesis or further analysis. Users who elect to create a personal account on the system can customize their workspace by creating their own parts libraries, adding new parts to the libraries, or reusing designs to quickly generate sets of related constructs.
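The toy Python sketch below illustrates the grammar-guided idea behind this kind of design, expanding a grammatical category into a DNA sequence from chosen parts; the grammar rule, part names, and sequences are invented for demonstration and are not GenoCAD's libraries.

```python
# Toy illustration of grammar-guided construct design in the spirit of GenoCAD.
# The grammar rule, part names and sequences below are invented for demonstration.

GRAMMAR = {
    "cassette": ["promoter", "rbs", "cds", "terminator"],   # one production rule
}

PART_LIBRARY = {
    "promoter":   {"pTac":  "TTGACAATTAATCATCGGCTCGTATAATG"},
    "rbs":        {"rbsA":  "AGGAGG"},
    "cds":        {"gfp":   "ATGAGTAAAGGAGAAGAACTTTTC"},      # truncated for brevity
    "terminator": {"term1": "CCAGGCATCAAATAAAACGAAAGGCTCAG"},
}

def assemble(category: str, choices: dict) -> str:
    """Expand a grammatical category into a DNA sequence using chosen parts."""
    if category in GRAMMAR:                       # non-terminal: expand its children
        return "".join(assemble(child, choices) for child in GRAMMAR[category])
    part_name = choices[category]                 # terminal: look up the chosen part
    return PART_LIBRARY[category][part_name]

if __name__ == "__main__":
    design = {"promoter": "pTac", "rbs": "rbsA", "cds": "gfp", "terminator": "term1"}
    print(assemble("cassette", design))
```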
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.
2018-03-01
The strategy for assuring the quality of electronics is presented as most important. To provide quality, the sequence of processes is considered and modeled as a Markov chain. The proposed improvement relies on simple database means of design for manufacturing, intended for future step-by-step development, and phased automation of electronics design and digital manufacturing is assumed. MATLAB modelling results showed an increase in effectiveness, indicating that new tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from individual processes up to the whole life cycle.
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
The purpose of this program is to demonstrate the technical readiness of a cost effective process sequence that has the potential for the production of flat plate photovoltaic modules which meet the 1986 price goal of $0.70 or less per peak watt. Program efforts included: preliminary design review, preliminary cell fabrication using the proposed process sequence, verification of sandblasting back cleanup, study of resist parameters, evaluation of pull strength of the proposed metallization, measurement of contact resistance of electroless Ni contacts, optimization of process parameters, design of the MEPSDU module, identification and testing of insulator tapes, development of a lamination process sequence, identification, discussions, demonstrations and visits with candidate equipment vendors, and evaluation of proposals for a tabbing and stringing machine.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of manufacturing processes that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designers in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. Ideally, these three tasks are performed without interactive processing or user intervention, so that the product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
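A small Python sketch of the last two evaluation steps, enumerating feasible assembly sequences and choosing the one with the fewest reorientations, is given below; the components, precedence constraints, and insertion directions are hypothetical and only illustrate the selection criterion, not the framework's geometry extraction.

```python
# Hypothetical sketch of two steps from the DFA framework: enumerate feasible assembly
# sequences under precedence constraints, then pick the one with the fewest reorientations.
# Components, precedence relations and assembly directions are invented for illustration.
from itertools import permutations

COMPONENTS = ["base", "shaft", "bearing", "cover"]
PRECEDENCE = {("base", "shaft"), ("shaft", "bearing"), ("base", "cover")}  # (before, after)
DIRECTION = {"base": "+z", "shaft": "+z", "bearing": "-z", "cover": "+z"}  # insertion axis

def feasible(seq):
    """A sequence is feasible if every precedence pair appears in the right order."""
    pos = {c: i for i, c in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in PRECEDENCE)

def reorientations(seq):
    """Count changes of insertion direction along the sequence."""
    return sum(1 for x, y in zip(seq, seq[1:]) if DIRECTION[x] != DIRECTION[y])

if __name__ == "__main__":
    candidates = [p for p in permutations(COMPONENTS) if feasible(p)]
    best = min(candidates, key=reorientations)
    print("best sequence:", best, "reorientations:", reorientations(best))
```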
Developing a Methodology for Designing Systems of Instruction.
ERIC Educational Resources Information Center
Carpenter, Polly
This report presents a description of a process for instructional system design, identification of the steps in the design process, and determination of their sequence and interrelationships. As currently envisioned, several interrelated steps must be taken, five of which provide the inputs to the final design process. There are analysis of…
Toward a More Flexible Web-Based Framework for Multidisciplinary Design
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Salas, A. O.
1999-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.
Sevy, Alexander M.; Jacobs, Tim M.; Crowe, James E.; Meiler, Jens
2015-01-01
Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a ‘single state’ design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states. Convergence to a single sequence is encouraged through an incrementally increasing convergence restraint for corresponding positions. Compared to MSD algorithms that enforce (constrain) an identical sequence on all states the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets to assess recovery of the germline, polyspecific sequence. Second, we design “promiscuous”, polyspecific proteins against all binding partners and measure recovery of the native sequence. We show that RECON is able to efficiently recover native-like, biologically relevant sequences in this diverse set of protein complexes. PMID:26147100
Designing of deployment sequence for braking and drift systems in atmosphere of Mars and Venus
NASA Astrophysics Data System (ADS)
Vorontsov, Victor
2006-07-01
Analysis of project development and of space research by contact methods, namely by means of automatic descent modules and balloons, shows that the design of the entry, descent and landing (EDL) sequence and of operations in the atmosphere is of great importance. This process starts at the very beginning of design, undergoes many iterations and influences the processing of nominal operation results. Along with the design of descent module systems, including systems for braking in the atmosphere, the flight operation sequence and the trajectories of motion in the atmosphere are designed. The probability of overall experiment success and the efficiency of the various systems depend on how correctly the entire operation sequence and the transitions from one phase to another are chosen. To date, the most extensive experience of Russian specialists in research of the terrestrial planets has been gained with the automatic interplanetary stations “Mars”, “Venera” and “Vega”, which carried descent modules and balloons drifting in the atmosphere. The particular interest and complexity of forming the EDL and drift sequence in the atmospheres of these planets arise from radically different operating conditions, in particular the strongly rarefied atmosphere of one planet and the extremely dense atmosphere of the other; this in turn determines the choice of braking systems, their parameters, and the method of forming the EDL sequence. At the same time, there are general fundamental methods and design research techniques that allow a common technical approach to the design of EDL and drift sequences in the atmosphere.
Development of Integrated Programs for Aerospace-vehicle design (IPAD): Reference design process
NASA Technical Reports Server (NTRS)
Meyer, D. D.
1979-01-01
The airplane design process and its interfaces with manufacturing and customer operations are documented to be used as criteria for the development of integrated programs for the analysis, design, and testing of aerospace vehicles. Topics cover: design process management, general purpose support requirements, design networks, and technical program elements. Design activity sequences are given for both supersonic and subsonic commercial transports, naval hydrofoils, and military aircraft.
Artan, N; Wilderer, P; Orhon, D; Morgenroth, E; Ozgür, N
2001-01-01
The Sequencing Batch Reactor (SBR) process for carbon and nutrient removal is subject to extensive research, and it is finding a wider application in full-scale installations. Despite the growing popularity, however, a widely accepted approach to process analysis and modeling, a unified design basis, and even a common terminology are still lacking; this situation is now regarded as the major obstacle hindering broader practical application of the SBR. In this paper a rational dimensioning approach is proposed for nutrient removal SBRs based on scientific information on process stoichiometry and modelling, also emphasizing practical constraints in design and operation.
Nine Suggestions for Improving Sequences. Technology.
ERIC Educational Resources Information Center
Muro, Don
1995-01-01
Maintains that many educators are using sequences to create accompaniments and practice tapes geared to student abilities. Describes musical instruction using Musical Instrument Digital Interface (MIDI). Discusses eight suggestions designed to make the process of sequencing more efficient. (CFR)
Program Synthesizes UML Sequence Diagrams
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2006-01-01
A computer program called "Rational Sequence" generates Unified Modeling Language (UML) sequence diagrams of a target Java program running on a Java virtual machine (JVM). Rational Sequence thereby performs a reverse engineering function that aids in the design documentation of the target Java program. Whereas previously, the construction of sequence diagrams was a tedious manual process, Rational Sequence generates UML sequence diagrams automatically from the running Java code.
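A rough Python analogue of the underlying idea, tracing a running program and emitting a textual call sequence, is sketched below; it only illustrates reverse-engineering call sequences from execution and is not the NASA tool or its Java/JVM mechanism.

```python
# Rough Python analogue of the idea behind "Rational Sequence": trace a running program
# and emit a textual sequence of calls (here in a simple arrow notation).
# This is only an illustration of reverse-engineering call sequences, not the NASA tool.
import sys

calls = []

def tracer(frame, event, arg):
    if event == "call":
        caller = frame.f_back.f_code.co_name if frame.f_back else "<module>"
        calls.append((caller, frame.f_code.co_name))
    return tracer

def fetch(url):
    return parse("data from " + url)

def parse(payload):
    return payload.upper()

if __name__ == "__main__":
    sys.settrace(tracer)
    fetch("example")
    sys.settrace(None)
    for caller, callee in calls:
        print(f"{caller} -> {callee}")
```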
Unified Engineering Software System
NASA Technical Reports Server (NTRS)
Purves, L. R.; Gordon, S.; Peltzman, A.; Dube, M.
1989-01-01
Collection of computer programs performs diverse functions in prototype engineering. NEXUS, NASA Engineering Extendible Unified Software system, is research set of computer programs designed to support full sequence of activities encountered in NASA engineering projects. Sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. Primarily addresses process of prototype engineering, task of getting single or small number of copies of product to work. Written in FORTRAN 77 and PROLOG.
On the design of henon and logistic map-based random number generator
NASA Astrophysics Data System (ADS)
Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah
2017-10-01
The key sequence is one of the main elements of a cryptosystem. The True Random Number Generator (TRNG) method is one approach to generating the key sequence. The randomness sources of TRNGs fall into three main groups: electrical-noise based, jitter based and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new TRNG design based on a discrete-time chaotic system is proposed and then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We use a comparator process as the harvester method to obtain the series of random bits. Without any post-processing, the proposed design generates a random bit sequence with a high entropy value and passes all NIST 800-22 statistical tests.
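A numerical Python sketch of this principle, combining a 2D Hénon map with a 1D logistic map and harvesting bits by comparison, is given below; the parameter values, seeds, rescaling, and comparator rule are assumptions for illustration, not the authors' exact LabVIEW design.

```python
# Numerical sketch of the principle described above: combine a Henon map (2D) with a
# logistic map (1D) and harvest bits by comparing the two trajectories. The parameter
# values, seeds, and the comparator rule are assumptions for illustration, not the
# authors' exact design (which was simulated in LabVIEW).

def chaotic_bits(n_bits: int, x=0.1, y=0.3, z=0.4, a=1.4, b=0.3, r=3.99):
    bits = []
    for _ in range(n_bits):
        x, y = 1.0 - a * x * x + y, b * x      # Henon map iteration
        z = r * z * (1.0 - z)                  # logistic map iteration
        # comparator harvester: compare the (rescaled) Henon state with the logistic state
        bits.append(1 if (x + 1.5) / 3.0 > z else 0)
    return bits

if __name__ == "__main__":
    stream = chaotic_bits(64)
    print("".join(map(str, stream)))
    print("ones fraction:", sum(stream) / len(stream))
```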
Nakano, Shogo; Asano, Yasuhisa
2015-02-03
Development of software and methods for design of complete sequences of functional proteins could contribute to studies of protein engineering and protein evolution. To this end, we developed the INTMSAlign software, and used it to design functional proteins and evaluate their usefulness. The software could assign both consensus and correlation residues of target proteins. We generated three protein sequences with S-selective hydroxynitrile lyase (S-HNL) activity, which we call designed S-HNLs; these proteins folded as efficiently as the native S-HNL. Sequence and biochemical analysis of the designed S-HNLs suggested that accumulation of neutral mutations occurs during the process of S-HNLs evolution from a low-activity form to a high-activity (native) form. Taken together, our results demonstrate that our software and the associated methods could be applied not only to design of complete sequences, but also to predictions of protein evolution, especially within families such as esterases and S-HNLs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHEN, JOANNA; SIMIRENKO, LISA; TAPASWI, MANJIRI
The DIVA software interfaces a process in which researchers design their DNA with a web-based graphical user interface, submit their designs to a central queue, and a few weeks later receive their sequence-verified clonal constructs. Each researcher independently designs the DNA to be constructed with a web-based BioCAD tool, and presses a button to submit their designs to a central queue. Researchers have web-based access to their DNA design queues, and can track the progress of their submitted designs as they progress from "evaluation", to "waiting for reagents", to "in progress", to "complete". Researchers access their completed constructs through the central DNA repository. Along the way, all DNA construction success/failure rates are captured in a central database. Once a design has been submitted to the queue, a small number of dedicated staff evaluate the design for feasibility and provide feedback to the responsible researcher if the design is either unreasonable (e.g., encompasses a combinatorial library of a billion constructs) or small design changes could significantly facilitate the downstream implementation process. The dedicated staff then use DNA assembly design automation software to optimize the DNA construction process for the design, leveraging existing parts from the DNA repository where possible and ordering synthetic DNA where necessary. SynTrack software manages the physical locations and availability of the various requisite reagents and process inputs (e.g., DNA templates). Once all requisite process inputs are available, the design progresses from "waiting for reagents" to "in progress" in the design queue. Human-readable and machine-parseable DNA construction protocols output by the DNA assembly design automation software are then executed by the dedicated staff exploiting lab automation devices wherever possible. Since all of the employed DNA construction methods are sequence-agnostic and standardized (they utilize the same enzymatic master mixes and reaction conditions), completely independent DNA construction tasks can be aggregated into the same multi-well plates and pursued in parallel. The resulting sets of cloned constructs can then be screened by high-throughput next-gen sequencing platforms for sequence correctness. A combination of long read-length (e.g., PacBio) and paired-end read platforms (e.g., Illumina) would be exploited depending on the particular task at hand (e.g., PacBio might be sufficient to screen a set of pooled constructs with significant gene divergence). Post sequence verification, designs for which at least one correct clone was identified will progress to a "complete" status, while designs for which no correct clones were identified will progress to a "failure" status. Depending on the failure mode (e.g., no transformants), and how many prior attempts/variations of assembly protocol have already been made for a given design, subsequent attempts may be made or the design can progress to a "permanent failure" state. All success and failure rate information will be captured during the process, including at which stage a given clonal construction procedure failed (e.g., no PCR product) and what the exact failure was (e.g., assembly piece 2 missing). This success/failure rate data can be leveraged to refine the DNA assembly design process.
A Web-Based Monitoring System for Multidisciplinary Design Projects
NASA Technical Reports Server (NTRS)
Rogers, James L.; Salas, Andrea O.; Weston, Robert P.
1998-01-01
In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes; and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.
A Web-Based System for Monitoring and Controlling Multidisciplinary Design Projects
NASA Technical Reports Server (NTRS)
Salas, Andrea O.; Rogers, James L.
1997-01-01
In today's competitive environment, both industry and government agencies are under enormous pressure to reduce the time and cost of multidisciplinary design projects. A number of frameworks have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. An examination of current frameworks reveals weaknesses in various areas such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, in conjunction with an existing framework, can improve these areas of weakness. This paper describes a system that executes a sequence of programs, monitors and controls the design process through a Web-based interface, and visualizes intermediate and final results through the use of Java(Tm) applets. A small sample problem, which includes nine processes with two analysis programs that are coupled to an optimizer, is used to demonstrate the feasibility of this approach.
On the decomposition of synchronous state machines using sequence invariant state machines
NASA Technical Reports Server (NTRS)
Hebbalalu, K.; Whitaker, S.; Cameron, K.
1992-01-01
This paper presents a few techniques for the decomposition of Synchronous State Machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) Design Technique for generating the component machines, include great ease and quickness in the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design leading to negligible re-design time.
Designing a fixed-blade gang ripsaw arbor with a pencil
Charles J. Gatchell; Charles J. Gatchell
1996-01-01
This paper presents a step-by-step procedure for designing the "best" sequence of saw spacings for a fixed-blade gang ripsaw arbor. Using the information contained in a cutting bill and knowledge of the lumber width distributions to be processed, thousands of possible saw spacing sequences can be reduced to a few good ones.
Automated Array Assembly, Phase 2. Low-cost Solar Array Project, Task 4
NASA Technical Reports Server (NTRS)
Lopez, M.
1978-01-01
Work was done to verify the technological readiness of a select process sequence with respect to satisfying the Low Cost Solar Array Project objectives of meeting the designated goals of $.50 per peak watt in 1986 (1975 dollars). The sequence examined consisted of: (1) 3 inch diameter as-sawn Czochralski-grown (100) silicon, (2) texture etching, (3) ion implanting, (4) laser annealing, (5) screen printing of ohmic contacts and (6) sprayed anti-reflective coatings. High volume production projections were made on the selected process sequence. Automated processing and movement of hardware at high rates were conceptualized to satisfy the project's 500 MW/yr capability. A production plan was formulated with flow diagrams integrating the various processes in the cell fabrication sequence.
Bennett, Richard R; Schneider, Hal E; Estrella, Elicia; Burgess, Stephanie; Cheng, Andrew S; Barrett, Caitlin; Lip, Va; Lai, Poh San; Shen, Yiping; Wu, Bai-Lin; Darras, Basil T; Beggs, Alan H; Kunkel, Louis M
2009-10-18
One of the most common and efficient methods for detecting mutations in genes is PCR amplification followed by direct sequencing. Until recently, the process of designing PCR assays has been to focus on individual assay parameters rather than concentrating on matching conditions for a set of assays. Primers for each individual assay were selected based on location and sequence concerns. The two primer sequences were then iteratively adjusted to make the individual assays work properly. This generally resulted in groups of assays with different annealing temperatures that required the use of multiple thermal cyclers or multiple passes in a single thermal cycler making diagnostic testing time-consuming, laborious and expensive.These factors have severely hampered diagnostic testing services, leaving many families without an answer for the exact cause of a familial genetic disease. A search of GeneTests for sequencing analysis of the entire coding sequence for genes that are known to cause muscular dystrophies returns only a small list of laboratories that perform comprehensive gene panels.The hypothesis for the study was that a complete set of universal assays can be designed to amplify and sequence any gene or family of genes using computer aided design tools. If true, this would allow automation and optimization of the mutation detection process resulting in reduced cost and increased throughput. An automated process has been developed for the detection of deletions, duplications/insertions and point mutations in any gene or family of genes and has been applied to ten genes known to bear mutations that cause muscular dystrophy: DMD; CAV3; CAPN3; FKRP; TRIM32; LMNA; SGCA; SGCB; SGCG; SGCD. Using this process, mutations have been found in five DMD patients and four LGMD patients (one in the FKRP gene, one in the CAV3 gene, and two likely causative heterozygous pairs of variations in the CAPN3 gene of two other patients). Methods and assay sequences are reported in this paper. This automated process allows laboratories to discover DNA variations in a short time and at low cost.
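The key design idea above is keeping all assays at compatible annealing temperatures; the Python sketch below scores candidate primers with a simple melting-temperature estimate and keeps only those in a narrow Tm band. The Wallace-rule estimate, window length, and Tm band are simplifications for illustration; the study used computer-aided design tools with more sophisticated thermodynamics.

```python
# Sketch of the core idea above: choose primer candidates whose estimated melting
# temperatures fall in one narrow band so all assays can share a single annealing
# temperature. The Wallace-rule Tm estimate and the target band are simplifications;
# the study relied on dedicated primer-design software with richer thermodynamics.

def wallace_tm(primer: str) -> float:
    """Rough Tm estimate: 2 C per A/T plus 4 C per G/C (valid only for short oligos)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def candidates_in_band(template: str, length=20, tm_low=58.0, tm_high=62.0):
    """Slide a window across the template and keep primers whose Tm is in the band."""
    hits = []
    for start in range(len(template) - length + 1):
        primer = template[start:start + length]
        tm = wallace_tm(primer)
        if tm_low <= tm <= tm_high:
            hits.append((start, primer, tm))
    return hits

if __name__ == "__main__":
    template = "ATGGCTAGCTAGGACGATCGATTACGGCTAGCATCGGATCCAGTCAGGTACCTTGACA"
    for start, primer, tm in candidates_in_band(template):
        print(start, primer, tm)
```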
A microprogrammable radar controller
NASA Technical Reports Server (NTRS)
Law, D. C.
1986-01-01
The Wave Propagation Lab. has completed the design and construction of a microprogrammable radar controller for atmospheric wind profiling. Unlike some radar controllers using state machines or hardwired logic for radar timing, this design is a high speed programmable sequencer with signal processing resources. A block diagram of the device is shown. The device is a single 8 1/2 inch by 10 1/2 inch printed circuit board and consists of three main subsections: (1) the host computer interface; (2) the microprogram sequencer; and (3) the signal processing circuitry. Each of these subsections is described in detail.
j5 DNA assembly design automation.
Hillson, Nathan J
2014-01-01
Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers construct DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 add significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.
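A minimal Python sketch of the flanking-homology idea that j5 automates is shown below: each fragment carries homology to its neighbours so the parts can be stitched together. The 20 bp default overlap and the toy part sequences are assumptions; j5's actual design process additionally covers combinatorial libraries, cost analysis, and type IIS (scar-less) strategies.

```python
# Minimal sketch of the flanking-homology idea that j5 automates: for an ordered list of
# parts, each assembly fragment is the part plus homology to its neighbours. The overlap
# length and the toy part sequences are assumptions; j5's real design process also covers
# combinatorial libraries, cost analysis, and type IIS (scar-less) strategies.

def flanking_homology_fragments(parts, overlap=20):
    """Return, for each part, the fragment to build: upstream homology + part + downstream homology."""
    n = len(parts)
    fragments = []
    for i, (name, seq) in enumerate(parts):
        upstream = parts[(i - 1) % n][1][-overlap:]   # tail of the previous part (circular assembly)
        downstream = parts[(i + 1) % n][1][:overlap]  # head of the next part
        fragments.append((name, upstream + seq + downstream))
    return fragments

if __name__ == "__main__":
    toy_parts = [("vector", "A" * 40 + "C" * 40), ("insert1", "G" * 60), ("insert2", "T" * 50)]
    for name, frag in flanking_homology_fragments(toy_parts, overlap=15):
        print(name, len(frag))
```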
Deroost, Natacha; Coomans, Daphné
2018-02-01
We examined the role of sequence awareness in a pure perceptual sequence learning design. Participants had to react to the target's colour that changed according to a perceptual sequence. By varying the mapping of the target's colour onto the response keys, motor responses changed randomly. The effect of sequence awareness on perceptual sequence learning was determined by manipulating the learning instructions (explicit versus implicit) and assessing the amount of sequence awareness after the experiment. In the explicit instruction condition (n = 15), participants were instructed to intentionally search for the colour sequence, whereas in the implicit instruction condition (n = 15), they were left uninformed about the sequenced nature of the task. Sequence awareness after the sequence learning task was tested by means of a questionnaire and the process-dissociation-procedure. The results showed that the instruction manipulation had no effect on the amount of perceptual sequence learning. Based on their report to have actively applied their sequence knowledge during the experiment, participants were subsequently regrouped in a sequence strategy group (n = 14, of which 4 participants from the implicit instruction condition and 10 participants from the explicit instruction condition) and a no-sequence strategy group (n = 16, of which 11 participants from the implicit instruction condition and 5 participants from the explicit instruction condition). Only participants of the sequence strategy group showed reliable perceptual sequence learning and sequence awareness. These results indicate that perceptual sequence learning depends upon the continuous employment of strategic cognitive control processes on sequence knowledge. Sequence awareness is suggested to be a necessary but not sufficient condition for perceptual learning to take place. Copyright © 2018 Elsevier B.V. All rights reserved.
Smith, Gretchen N. L.; Conway, Christopher M.; Bauernschmidt, Althea; Pisoni, David B.
2015-01-01
Recent research suggests that language acquisition may rely on domain-general learning abilities, such as structured sequence processing, which is the ability to extract, encode, and represent structured patterns in a temporal sequence. If structured sequence processing supports language, then it may be possible to improve language function by enhancing this foundational learning ability. The goal of the present study was to use a novel computerized training task as a means to better understand the relationship between structured sequence processing and language function. Participants first were assessed on pre-training tasks to provide baseline behavioral measures of structured sequence processing and language abilities. Participants were then quasi-randomly assigned to either a treatment group involving adaptive structured visuospatial sequence training, a treatment group involving adaptive non-structured visuospatial sequence training, or a control group. Following four days of sequence training, all participants were assessed with the same pre-training measures. Overall comparison of the post-training means revealed no group differences. However, in order to examine the potential relations between sequence training, structured sequence processing, and language ability, we used a mediation analysis that showed two competing effects. In the indirect effect, adaptive sequence training with structural regularities had a positive impact on structured sequence processing performance, which in turn had a positive impact on language processing. This finding not only identifies a potential novel intervention to treat language impairments but also may be the first demonstration that structured sequence processing can be improved and that this, in turn, has an impact on language processing. However, in the direct effect, adaptive sequence training with structural regularities had a direct negative impact on language processing. This unexpected finding suggests that adaptive training with structural regularities might potentially interfere with language processing. Taken together, these findings underscore the importance of pursuing designs that promote a better understanding of the mechanisms underlying training-related changes, so that regimens can be developed that help reduce these types of negative effects while simultaneously maximizing the benefits to outcome measures of interest. PMID:25946222
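The mediation logic described above can be illustrated with a short Python sketch using synthetic data: the indirect effect is the product of the training-to-mediator and mediator-to-outcome paths, and the direct effect is the remaining training-to-outcome path. The simulated numbers are invented and are not the study's data.

```python
# Illustrative sketch of the mediation analysis described above, using synthetic data:
# indirect effect = (training -> mediator path) x (mediator -> outcome path),
# direct effect = remaining training -> outcome path. Numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200
training = rng.integers(0, 2, n).astype(float)                     # 0 = control, 1 = adaptive training
mediator = 0.5 * training + rng.normal(0, 1, n)                    # structured sequence processing score
outcome = 0.4 * mediator - 0.3 * training + rng.normal(0, 1, n)    # language processing score

def ols(y, X):
    """Ordinary least squares with an intercept; returns coefficients for X's columns."""
    X1 = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

a = ols(mediator, [training])[0]                 # path a: training -> mediator
b, c_prime = ols(outcome, [mediator, training])  # path b and direct effect c'
print("indirect effect (a*b):", a * b)
print("direct effect (c'):   ", c_prime)
```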
Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun
2012-01-01
Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource intensive nature of NGS secondary analysis built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
100 Gbps Wireless System and Circuit Design Using Parallel Spread-Spectrum Sequencing
NASA Astrophysics Data System (ADS)
Scheytt, J. Christoph; Javed, Abdul Rehman; Bammidi, Eswara Rao; KrishneGowda, Karthik; Kallfass, Ingmar; Kraemer, Rolf
2017-09-01
In this article mixed analog/digital signal processing techniques based on parallel spread-spectrum sequencing (PSSS) and radio frequency (RF) carrier synchronization for ultra-broadband wireless communication are investigated on system and circuit level.
PrimerMapper: high throughput primer design and graphical assembly for PCR and SNP detection
O’Halloran, Damien M.
2016-01-01
Primer design represents a widely employed gambit in diverse molecular applications including PCR, sequencing, and probe hybridization. Variations of PCR, including primer walking, allele-specific PCR, and nested PCR provide specialized validation and detection protocols for molecular analyses that often require screening large numbers of DNA fragments. In these cases, automated sequence retrieval and processing become important features, and furthermore, a graphic that provides the user with a visual guide to the distribution of designed primers across targets is most helpful in quickly ascertaining primer coverage. To this end, I describe here, PrimerMapper, which provides a comprehensive graphical user interface that designs robust primers from any number of inputted sequences while providing the user with both, graphical maps of primer distribution for each inputted sequence, and also a global assembled map of all inputted sequences with designed primers. PrimerMapper also enables the visualization of graphical maps within a browser and allows the user to draw new primers directly onto the webpage. Other features of PrimerMapper include allele-specific design features for SNP genotyping, a remote BLAST window to NCBI databases, and remote sequence retrieval from GenBank and dbSNP. PrimerMapper is hosted at GitHub and freely available without restriction. PMID:26853558
Unimodular sequence design under frequency hopping communication compatibility requirements
NASA Astrophysics Data System (ADS)
Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu
2016-12-01
The integrated design of radar and anonymous communication has drawn more attention recently, since wireless communication systems need enhanced security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize integrated design is to meet the spectrum compatibility requirement between these two systems. This paper deals with a unimodular sequence design technique that optimizes both spectrum compatibility and the peak sidelobe levels (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement realizes anonymous communication for the FH system and provides this system with a low probability of intercept (LPI), since the spectrum of the FH system is hidden in that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence optimization design problem into an iterative fitting process in which the power spectral density (PSD) and PSL behaviors of the generated sequences fit both PSD and PSL templates progressively. The two templates are established from the spectrum compatibility requirement and the expected PSL. To ensure communication security and reliability, the spectrum compatibility requirement is given higher priority in the GFT algorithm, which is realized by adaptively adjusting the weight between these two terms during the iteration process. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-to-interference ratio (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above respects, showing its effectiveness.
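A simplified numerical illustration of fitting a unimodular sequence to a spectral template by alternating projections is given below: the spectrum is pushed toward a template that notches out a band reserved for the FH system, then unit modulus is restored in the time domain. This is a generic sketch of the template-fitting idea only, not the GFT algorithm (which additionally fits a PSL template with adaptive weighting).

```python
# Simplified numerical illustration of fitting a unimodular sequence to a spectral
# template by alternating projections: push the spectrum toward a template that notches
# out a band reserved for the FH communication system, then restore unit modulus in time.
# This is a generic sketch of the template-fitting idea, not the GFT algorithm itself.
import numpy as np

def design_unimodular(n=256, notch=(0.20, 0.30), iters=200, seed=1):
    rng = np.random.default_rng(seed)
    x = np.exp(1j * 2 * np.pi * rng.random(n))          # random unimodular start
    freqs = np.fft.fftfreq(n)                           # normalized frequencies in [-0.5, 0.5)
    stop = (np.abs(freqs) >= notch[0]) & (np.abs(freqs) <= notch[1])
    template = np.where(stop, 1e-3, 1.0)                # low power in the shared band
    for _ in range(iters):
        spectrum = np.fft.fft(x)
        spectrum = template * np.exp(1j * np.angle(spectrum))   # fit the magnitude template
        x = np.fft.ifft(spectrum)
        x = np.exp(1j * np.angle(x))                    # restore the unimodular constraint
    return x, freqs, stop

if __name__ == "__main__":
    x, freqs, stop = design_unimodular()
    psd = np.abs(np.fft.fft(x)) ** 2
    print("mean stopband power :", psd[stop].mean())
    print("mean passband power :", psd[~stop].mean())
```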
The Automated Array Assembly Task of the Low-cost Silicon Solar Array Project, Phase 2
NASA Technical Reports Server (NTRS)
Coleman, M. G.; Grenon, L.; Pastirik, E. M.; Pryor, R. A.; Sparks, T. G.
1978-01-01
An advanced process sequence for manufacturing high efficiency solar cells and modules in a cost-effective manner is discussed. Emphasis is on process simplicity and minimizing consumed materials. The process sequence incorporates texture etching, plasma processes for damage removal and patterning, ion implantation, low pressure silicon nitride deposition, and plated metal. A reliable module design is presented. Specific process step developments are given. A detailed cost analysis was performed to indicate future areas of fruitful cost reduction effort. Recommendations for advanced investigations are included.
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
Design work for a photovoltaic module, fabricated using single crystal silicon dendritic web sheet material, resulted in the identification of a surface treatment for the module glass superstrate which improved module efficiencies. A final solar module environmental test, a simulated hailstone impact test, was conducted on full size module superstrates to verify that the module's tempered glass superstrate can withstand specified hailstone impacts near the corners and edges of the module. Process sequence design work on the metallization process, selective liquid dopant investigation, dry processing, and antireflective/photoresist application technique tasks, and the optimum thickness for Ti/Pd are discussed. A noncontact cleaning method for raw web cleaning was identified and antireflective and photoresist coatings for the dendritic webs were selected. The design of a cell string conveyor, an interconnect feed system, and a rolling ultrasonic spot bonding head, and the identification of the optimal commercially available programmable control system are also discussed. An economic analysis to assess cost goals of the process sequence is also given.
NASA Astrophysics Data System (ADS)
Xia, Huanxiong; Xiang, Dong; Yang, Wang; Mou, Peng
2014-12-01
The low-temperature plasma technique is one of the critical techniques in IC manufacturing processes such as etching and thin-film deposition, and plasma uniformity greatly impacts process quality, so the design of plasma uniformity control is very important but difficult. It is hard to finely and flexibly regulate the spatial distribution of the plasma in the chamber by controlling the discharge parameters or modifying the structure in zero-dimensional space; such approaches can only adjust the overall level of the process factors. In view of this problem, a segmented non-uniform dielectric module design solution is proposed for the regulation of the plasma profile in a CCP chamber. The solution achieves refined and flexible regulation of the plasma profile in the radial direction via configuring the relative permittivity and the width of each segment. In order to solve this design problem, a novel simulation-based auto-design approach is proposed, which can automatically design the positional sequence with multiple independent variables to make the output target profile in the parameterized simulation model approximate the one that users preset. This approach employs the idea of a quasi-closed-loop control system and works in an iterative mode. It starts from initial values of the design variable sequences and predicts better sequences via feedback of the profile error between the output target profile and the expected one, stopping only when the profile error is narrowed to within the preset tolerance.
URPD: a specific product primer design tool.
Chuang, Li-Yeh; Cheng, Yu-Huei; Yang, Cheng-Hong
2012-06-19
Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present a freely available software tool that is aimed not at analyzing large sequences but at providing a straightforward, visual primer design process for infrequent users. URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, and memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is achieved through built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process that produces feasible primer pairs satisfying the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis with virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/.
Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho
2017-11-01
High-throughput sequencing methods have become a routine analysis tool in the environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics analyses can be performed with several public tools, many analytical pipelines offer too few options for the optimal analysis of more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We described the design and options of PipeCraft and evaluated its performance by analysing data sets from three different sequencing platforms. We demonstrated that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
Dynamic peptide libraries for the discovery of supramolecular nanomaterials
NASA Astrophysics Data System (ADS)
Pappas, Charalampos G.; Shafi, Ramim; Sasselli, Ivan R.; Siccardi, Henry; Wang, Tong; Narang, Vishal; Abzalimov, Rinat; Wijerathne, Nadeesha; Ulijn, Rein V.
2016-11-01
Sequence-specific polymers, such as oligonucleotides and peptides, can be used as building blocks for functional supramolecular nanomaterials. The design and selection of suitable self-assembling sequences is, however, challenging because of the vast combinatorial space available. Here we report a methodology that allows the peptide sequence space to be searched for self-assembling structures. In this approach, unprotected homo- and heterodipeptides (including aromatic, aliphatic, polar and charged amino acids) are subjected to continuous enzymatic condensation, hydrolysis and sequence exchange to create a dynamic combinatorial peptide library. The free-energy change associated with the assembly process itself gives rise to selective amplification of self-assembling candidates. By changing the environmental conditions during the selection process, different sequences and consequent nanoscale morphologies are selected.
Dynamic peptide libraries for the discovery of supramolecular nanomaterials.
Pappas, Charalampos G; Shafi, Ramim; Sasselli, Ivan R; Siccardi, Henry; Wang, Tong; Narang, Vishal; Abzalimov, Rinat; Wijerathne, Nadeesha; Ulijn, Rein V
2016-11-01
Sequence-specific polymers, such as oligonucleotides and peptides, can be used as building blocks for functional supramolecular nanomaterials. The design and selection of suitable self-assembling sequences is, however, challenging because of the vast combinatorial space available. Here we report a methodology that allows the peptide sequence space to be searched for self-assembling structures. In this approach, unprotected homo- and heterodipeptides (including aromatic, aliphatic, polar and charged amino acids) are subjected to continuous enzymatic condensation, hydrolysis and sequence exchange to create a dynamic combinatorial peptide library. The free-energy change associated with the assembly process itself gives rise to selective amplification of self-assembling candidates. By changing the environmental conditions during the selection process, different sequences and consequent nanoscale morphologies are selected.
Design of lunar base observatories
NASA Technical Reports Server (NTRS)
Johnson, Stewart W.
1988-01-01
Several recently suggested concepts for conducting astronomy from a lunar base are cited. Then, the process and sequence of events that will be required to design an observatory to be emplaced on the Moon are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillson, Nathan
j5 automates and optimizes the design of the molecular biological process of cloning/constructing DNA. j5 enables users to benefit from (combinatorial) multi-part scar-less SLIC, Gibson, CPEC, Golden Gate assembly, or variants thereof, for which automation software does not currently exist, without the intense labor currently associated with the process. j5 inputs a list of the DNA sequences to be assembled, along with a GenBank, FASTA, jbei-seq, or SBOL v1.1 format sequence file for each DNA source. Given the list of DNA sequences to be assembled, j5 first determines the cost-minimizing assembly strategy for each part (direct synthesis, PCR/SOE, or oligo-embedding), designs DNA oligos with Primer3, adds flanking homology sequences (SLIC, Gibson, and CPEC; optimized with Primer3 for CPEC) or optimized overhang sequences (Golden Gate) to the oligos and direct synthesis pieces, and utilizes BLAST to check against oligo mis-priming and assembly piece incompatibility events. After identifying DNA oligos that are already contained within a local collection for reuse, the program estimates the total cost of direct synthesis and new oligos to be ordered. In the instance that j5 identifies putative assembly piece incompatibilities (multiple pieces with high flanking sequence homology), the program suggests hierarchical subassemblies where possible. The program outputs a comma-separated value (CSV) file, viewable via Excel or other spreadsheet software, that contains assembly design information (such as the PCR/SOE reactions to perform, their anticipated sizes and sequences, etc.) as well as a properly annotated GenBank file containing the sequence resulting from the assembly, and appends the local oligo library with the oligos to be ordered. j5 condenses multiple independent assembly projects into 96-well format for high-throughput liquid-handling robotics platforms, and generates configuration files for the PR-PR biology-friendly robot programming language. j5 thus provides a new way to design DNA assembly procedures much more productively and efficiently, not only in terms of time, but also in terms of cost. To a large extent, however, j5 does not allow people to do something that could not be done before by hand given enough time and effort. An exception to this is that, since the very act of using j5 to design the DNA assembly process standardizes the experimental details and workflow, j5 enables a single person to concurrently perform the independent DNA construction tasks of an entire group of researchers. Currently, this is not readily possible, since separate researchers employ disparate design strategies and workflows, and furthermore, their designs and workflows are very infrequently fully captured in an electronic format which is conducive to automation.
A step-by-step introduction to rule-based design of synthetic genetic constructs using GenoCAD.
Wilson, Mandy L; Hertzberg, Russell; Adam, Laura; Peccoud, Jean
2011-01-01
GenoCAD is an open source web-based system that provides a streamlined, rule-driven process for designing genetic sequences. GenoCAD provides a graphical interface that allows users to design sequences consistent with formalized design strategies specific to a domain, organization, or project. Design strategies include limited sets of user-defined parts and rules indicating how these parts are to be combined in genetic constructs. In addition to reducing design time to minutes, GenoCAD improves the quality and reliability of the finished sequence by ensuring that the designs follow established rules of sequence construction. GenoCAD.org is a publicly available instance of GenoCAD that can be found at www.genocad.org. The source code and latest build are available from SourceForge to allow advanced users to install and customize GenoCAD for their unique needs. This chapter focuses primarily on how the GenoCAD tools can be used to organize genetic parts into customized personal libraries, then how these libraries can be used to design sequences. In addition, GenoCAD's parts management system and search capabilities are described in detail. Instructions are provided for installing a local instance of GenoCAD on a server. Some of the future enhancements of this rapidly evolving suite of applications are briefly described. Copyright © 2011 Elsevier Inc. All rights reserved.
Registry in a tube: multiplexed pools of retrievable parts for genetic design space exploration
Woodruff, Lauren B. A.; Gorochowski, Thomas E.; Roehner, Nicholas; Densmore, Douglas; Gordon, D. Benjamin; Nicol, Robert
2017-01-01
Abstract Genetic designs can consist of dozens of genes and hundreds of genetic parts. After evaluating a design, it is desirable to implement changes without the cost and burden of starting the construction process from scratch. Here, we report a two-step process where a large design space is divided into deep pools of composite parts, from which individuals are retrieved and assembled to build a final construct. The pools are built via multiplexed assembly and sequenced using next-generation sequencing. Each pool consists of ∼20 Mb of up to 5000 unique and sequence-verified composite parts that are barcoded for retrieval by PCR. This approach is applied to a 16-gene nitrogen fixation pathway, which is broken into pools containing a total of 55 848 composite parts (71.0 Mb). The pools encompass an enormous design space (10^43 possible 23 kb constructs), from which an algorithm-guided 192-member 4.5 Mb library is built. Next, all 10^30 possible genetic circuits based on 10 repressors (NOR/NOT gates) are encoded in pools where each repressor is fused to all permutations of input promoters. These demonstrate that multiplexing can be applied to encompass entire design spaces from which individuals can be accessed and evaluated. PMID:28007941
Camilo, Cesar M; Lima, Gustavo M A; Maluf, Fernando V; Guido, Rafael V C; Polikarpov, Igor
2016-01-01
Following the burgeoning of genomic and transcriptomic sequencing data, biochemical and molecular biology groups worldwide are implementing high-throughput cloning and mutagenesis facilities in order to obtain a large number of soluble proteins for structural and functional characterization. Since manual primer design can be a time-consuming and error-prone step, particularly when working with hundreds of targets, automation of the primer design process becomes highly desirable. HTP-OligoDesigner was created to provide the scientific community with a simple and intuitive online primer design tool for both laboratory-scale and high-throughput projects of sequence-independent gene cloning and site-directed mutagenesis, as well as a Tm calculator for quick queries.
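As an illustration of the kind of quick Tm query such a calculator answers, here is a minimal sketch using two textbook approximations (the Wallace rule for short primers and a GC-content formula for longer ones); the abstract does not specify which method HTP-OligoDesigner actually uses.

```python
def melting_temperature(primer: str) -> float:
    """Rough primer Tm estimate in degrees Celsius.

    Uses the Wallace rule (2*AT + 4*GC) for primers shorter than 14 nt,
    and the GC-content approximation 64.9 + 41*(GC - 16.4)/N otherwise.
    """
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    n = at + gc
    if n == 0:
        raise ValueError("empty primer")
    if n < 14:
        return 2.0 * at + 4.0 * gc
    return 64.9 + 41.0 * (gc - 16.4) / n

print(round(melting_temperature("ATGCGTACGTTAGCCAGGT"), 1))  # ~51 degC
```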
Low-Cost, High-Throughput Sequencing of DNA Assemblies Using a Highly Multiplexed Nextera Process.
Shapland, Elaine B; Holmes, Victor; Reeves, Christopher D; Sorokin, Elena; Durot, Maxime; Platt, Darren; Allen, Christopher; Dean, Jed; Serber, Zach; Newman, Jack; Chandran, Sunil
2015-07-17
In recent years, next-generation sequencing (NGS) technology has greatly reduced the cost of sequencing whole genomes, whereas the cost of sequence verification of plasmids via Sanger sequencing has remained high. Consequently, industrial-scale strain engineers either limit the number of designs or take short cuts in quality control. Here, we show that over 4000 plasmids can be completely sequenced in one Illumina MiSeq run for less than $3 each (15× coverage), which is a 20-fold reduction over using Sanger sequencing (2× coverage). We reduced the volume of the Nextera tagmentation reaction by 100-fold and developed an automated workflow to prepare thousands of samples for sequencing. We also developed software to track the samples and associated sequence data and to rapidly identify correctly assembled constructs having the fewest defects. As DNA synthesis and assembly become a centralized commodity, this NGS quality control (QC) process will be essential to groups operating high-throughput pipelines for DNA construction.
incaRNAfbinv: a web server for the fragment-based design of RNA sequences
Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny
2016-01-01
Abstract In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there has been considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily fold exactly into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input a target RNA secondary structure, optional sequence and motif constraints, and optional targets for minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel naturally occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893
Simple sequence repeat marker loci discovery using SSR primer.
Robinson, Andrew J; Love, Christopher G; Batley, Jacqueline; Barker, Gary; Edwards, David
2004-06-12
Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker assisted selection of crop plants and a range of molecular ecology and diversity studies. With the increase in the availability of DNA sequence information, an automated process to identify and design PCR primers for amplification of SSR loci would be a useful tool in plant breeding programs. We report an application that integrates SPUTNIK, an SSR repeat finder, with Primer3, a PCR primer design program, into one pipeline tool, SSR Primer. On submission of multiple FASTA formatted sequences, the script screens each sequence for SSRs using SPUTNIK. The results are parsed to Primer3 for locus-specific primer design. The script makes use of a Web-based interface, enabling remote use. This program has been written in PERL and is freely available for non-commercial users by request from the authors. The Web-based version may be accessed at http://hornbill.cspp.latrobe.edu.au/
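A minimal sketch of the two-stage idea (repeat finding followed by locus-specific primer design), with a simple regular expression standing in for SPUTNIK and the flanking regions that would be handed to Primer3 merely reported; the motif length, repeat count and flank size are illustrative.

```python
import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4, flank=100):
    """Yield (start, end, motif, repeats, left_flank, right_flank) for each
    simple sequence repeat; the flanks are what a primer design step such as
    Primer3 would use to place locus-specific primers."""
    seq = seq.upper()
    pattern = re.compile(r"([ACGT]{%d,%d}?)\1{%d,}"
                         % (min_motif, max_motif, min_repeats - 1))
    for m in pattern.finditer(seq):
        motif = m.group(1)
        repeats = (m.end() - m.start()) // len(motif)
        left = seq[max(0, m.start() - flank):m.start()]
        right = seq[m.end():m.end() + flank]
        yield m.start(), m.end(), motif, repeats, left, right

demo = "ACGGTCTTAGCAATCGGATT" + "GATA" * 8 + "CCATGGTACGTTAGCAATCG"
for hit in find_ssrs(demo):
    print(hit[:4])   # (20, 52, 'GATA', 8)
```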
NG6: Integrated next generation sequencing storage and processing environment.
Mariette, Jérôme; Escudié, Frédéric; Allias, Nicolas; Salin, Gérald; Noirot, Céline; Thomas, Sylvain; Klopp, Christophe
2012-09-09
Next-generation sequencing platforms are now well established in sequencing centres and in some laboratories. Upcoming smaller scale machines such as the 454 GS Junior from Roche or the MiSeq from Illumina will increase the number of laboratories hosting a sequencer. In such a context, it is important to provide these teams with an easily manageable environment to store and process the produced reads. We describe a user-friendly information system able to manage large sets of sequencing data. It includes, on the one hand, a workflow environment already containing pipelines adapted to different input formats (sff, fasta, fastq and qseq), different sequencers (Roche 454, Illumina HiSeq) and various analyses (quality control, assembly, alignment, diversity studies,…) and, on the other hand, a secured web site giving access to the results. The connected user will be able to download raw and processed data and browse through the analysis result statistics. The provided workflows can easily be modified or extended, and new ones can be added. Ergatis is used as the workflow building, running and monitoring system. The analyses can be run locally or in a cluster environment using Sun Grid Engine. NG6 is a complete information system designed to answer the needs of a sequencing platform. It provides a user-friendly interface to process, store and download high-throughput sequencing data.
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Conflicts detected by our algorithm are fed back to the user and form the basis for deductive-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.
ERIC Educational Resources Information Center
Onaral, Banu; And Others
This report describes the development of a Drexel University electrical and computer engineering course on digital filter design that used interactive computing and graphics, and was one of three courses in a senior-level sequence on digital signal processing (DSP). Interactive and digital analysis/design routines and the interconnection of these…
Attractors in Sequence Space: Agent-Based Exploration of MHC I Binding Peptides.
Jäger, Natalie; Wisniewska, Joanna M; Hiss, Jan A; Freier, Anja; Losch, Florian O; Walden, Peter; Wrede, Paul; Schneider, Gisbert
2010-01-12
Ant Colony Optimization (ACO) is a meta-heuristic that utilizes a computational analogue of ant trail pheromones to solve combinatorial optimization problems. The size of the ant colony and the representation of the ants' pheromone trails are specific to the given optimization problem. In the present study, we employed ACO to generate novel peptides that stabilize MHC I protein on the plasma membrane of a murine lymphoma cell line. A jury of feedforward neural network classifiers served as the fitness function for peptide design by ACO. Bioactive murine MHC I H-2K(b) stabilizing as well as nonstabilizing octapeptides were designed, synthesized and tested. These peptides reveal residue motifs that are relevant for MHC I receptor binding. We demonstrate how the performance of the implemented ACO algorithm depends on the colony size and the size of the search space. The actual peptide design process by ACO constitutes a search path in sequence space that can be visualized as trajectories on a self-organizing map (SOM). By projecting the sequence space onto a SOM, we visualize the convergence of the different solutions that emerge during the optimization process in sequence space. The SOM representation reveals attractors in sequence space for MHC I binding peptides. The combination of ACO and SOM enables systematic peptide optimization. This technique allows for the rational design of various types of bioactive peptides with minimal experimental effort. Here, we demonstrate its successful application to the design of MHC-I binding and nonbinding peptides which exhibit substantial bioactivity in a cell-based assay. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
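A compact sketch of the ACO loop described above, with a trivial stand-in fitness (fraction of hydrophobic residues) in place of the neural-network jury used in the study; colony size, evaporation rate and iteration count are illustrative.

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"           # the 20 proteinogenic amino acids
LENGTH, COLONY, ITERS = 8, 30, 50     # octapeptides, colony size, iterations
RHO = 0.1                             # pheromone evaporation rate

def fitness(peptide):
    """Stand-in score; the study used a jury of neural-network classifiers."""
    return sum(peptide.count(a) for a in "AILMFWV") / len(peptide)

# pheromone[i][a]: preference for amino acid a at sequence position i
pheromone = [{a: 1.0 for a in AA} for _ in range(LENGTH)]

best, best_score = None, -1.0
for _ in range(ITERS):
    colony = []
    for _ in range(COLONY):
        pep = "".join(random.choices(AA, weights=[pheromone[i][a] for a in AA])[0]
                      for i in range(LENGTH))
        colony.append((fitness(pep), pep))
    score, elite = max(colony)                   # best ant of this iteration
    if score > best_score:
        best, best_score = elite, score
    for i in range(LENGTH):                      # evaporate, then reinforce
        for a in AA:
            pheromone[i][a] *= (1.0 - RHO)
        pheromone[i][elite[i]] += score          # deposit along the elite trail

print(best, round(best_score, 2))
```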
The sequence measurement system of the IR camera
NASA Astrophysics Data System (ADS)
Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo
2011-08-01
IR cameras are widely used in optoelectronic tracking, optoelectronic measurement, fire control and optoelectronic countermeasure applications, but the output timing of most IR cameras applied in such projects is complex, and the timing documentation supplied by the manufacturer is often not detailed. Because downstream image transmission and image processing systems require the detailed timing of the IR camera, a sequence (timing) measurement system for IR cameras has been designed, and a detailed procedure for measuring the timing of an applied IR camera is presented. The system combines FPGA programming with online observation using the SignalTap logic analyzer to obtain the precise timing of the IR camera's output signal, and it supplies the resulting detailed documentation to the image transmission and image processing systems. The measurement system comprises a Camera Link input interface, an LVDS input interface, an FPGA and a Camera Link output interface, of which the FPGA is the key component. Both Camera Link and LVDS video signals can be accepted, and because image processing and image memory cards usually use Camera Link as their input interface, the output of the measurement system is also designed as a Camera Link interface, so the system performs interface conversion for some cameras in addition to timing measurement. Inside the FPGA, the timing measurement program, the pixel clock adjustment, the SignalTap file configuration and the SignalTap online observation are integrated to realize precise measurement. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and also determines the line offset and row offset of the image. The system accurately measures the timing of the project-applied camera and provides the concrete parameters of fval, lval, pixclk, line offset and row offset to downstream systems such as the image processing and image transmission systems. Experiments show that the sequence measurement system obtains precise timing measurements and works stably, laying a foundation for the downstream systems.
NASA Astrophysics Data System (ADS)
Hernández, María Isabel; Couso, Digna; Pintó, Roser
2015-04-01
The study we have carried out aims to characterize 15- to 16-year-old students' learning progressions throughout the implementation of a teaching-learning sequence on the acoustic properties of materials. Our purpose is to better understand students' modeling processes on this topic and to identify how the instructional design and its actual enactment influence students' learning progressions. This article presents the design principles underlying the structure and types of modeling and inquiry activities designed to promote students' development of three conceptual models. Some of these activities are enhanced by the use of ICT, such as sound level meters connected to data capture systems, which facilitate the measurement of the intensity level of sound emitted by a sound source and transmitted through different materials. Framed within the design-based research paradigm, the study consists of the implementation of the designed teaching sequence with two groups of students (n = 29) in their science classes. The analysis of students' written productions, together with classroom observations of the implementation of the teaching sequence, allowed us to characterize students' development of the conceptual models. Moreover, we could show the influence of different modeling and inquiry activities on students' development of the conceptual models, identifying those that have a major impact on students' modeling processes. Having observed different levels of development of each conceptual model, we interpret our results in terms of the attributes of each conceptual model, the distance between students' preliminary mental models and the intended conceptual models, and the instructional design and enactment.
Sleep Does Not Enhance Motor Sequence Learning
ERIC Educational Resources Information Center
Rickard, Timothy C.; Cai, Denise J.; Rieth, Cory A.; Jones, Jason; Ard, M. Colin
2008-01-01
Improvements in motor sequence performance have been observed after a delay involving sleep. This finding has been taken as evidence for an active sleep consolidation process that enhances subsequent performance. In a review of this literature, however, the authors observed 4 aspects of data analyses and experimental design that could lead to…
NASA Technical Reports Server (NTRS)
Khan, Gufran Sayeed; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
The presentation covers grazing incidence X-ray optics, motivation and challenges, mid-spatial-frequency generation in cylindrical polishing, design considerations for the polishing lap, simulation studies and experimental results, future scope, and a summary. Topics include the current status of replication optics technology, the cylindrical polishing process using a large polishing lap, non-conformance of the polishing lap to the optics, development of the software and polishing machine, deterministic prediction of polishing, a polishing experiment under optimum conditions, and a polishing experiment based on a known error profile. Future plans include determination of the non-uniformity in the polishing lap compliance, development of a polishing sequence based on a known error profile of the specimen, software for generating a mandrel polishing sequence, design and development of a flexible polishing lap, and a computer-controlled localized polishing process.
Digital analyzer for point processes based on first-in-first-out memories
NASA Astrophysics Data System (ADS)
Basano, Lorenzo; Ottonello, Pasquale; Schiavi, Enore
1992-06-01
We present an entirely new version of a multipurpose instrument designed for the statistical analysis of point processes, especially those characterized by high bunching. A long sequence of pulses can be recorded in the RAM bank of a personal computer via a suitably designed front end which employs a pair of first-in-first-out (FIFO) memories; these allow one to build an analyzer that, besides being simpler from the electronic point of view, is capable of sustaining much higher intensity fluctuations of the point process. The overflow risk of the device is evaluated by treating the FIFO pair as a queueing system. The apparatus was tested using both a deterministic signal and a sequence of photoelectrons obtained from laser light scattered by random surfaces.
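The abstract does not detail the queueing model; as an illustration of the kind of calculation involved when the FIFO pair is treated as a queueing system, here is the standard blocking probability of a finite M/M/1/K queue (Poisson pulse arrivals, exponential read-out, buffer of K words), which is only a stand-in for the authors' actual analysis.

```python
def mm1k_blocking(arrival_rate, service_rate, K):
    """Probability that an arriving pulse finds the K-word buffer full
    (classic M/M/1/K result); rho is the traffic intensity."""
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

# e.g. a 512-word FIFO read out twice as fast as the mean pulse arrival rate
print(mm1k_blocking(arrival_rate=1.0, service_rate=2.0, K=512))
```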
Program Helps Decompose Complicated Design Problems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1993-01-01
Time saved by intelligent decomposition into smaller, interrelated problems. DeMAID is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Displays modules in N x N matrix format. Though it requires investment of time to generate and refine list of modules for input, it saves considerable amount of money and time in total design process, particularly for new design problems in which ordering of modules has not been defined. Program also implemented to examine assembly-line process or ordering of tasks and milestones.
Chiu, Kuo Ping; Wong, Chee-Hong; Chen, Qiongyu; Ariyaratne, Pramila; Ooi, Hong Sain; Wei, Chia-Lin; Sung, Wing-Kin Ken; Ruan, Yijun
2006-08-25
We recently developed the Paired End diTag (PET) strategy for efficient characterization of mammalian transcriptomes and genomes. The paired-end nature of short PET sequences derived from long DNA fragments raised a new set of bioinformatics challenges, including how to extract PETs from raw sequence reads and how to correctly yet efficiently map PETs to reference genome sequences. To accommodate and streamline data analysis of the large volume of PET sequences generated from each PET experiment, an automated PET data processing pipeline is desirable. We designed an integrated computation program package, PET-Tool, to automatically process PET sequences and map them to the genome sequences. The Tool was implemented as a web-based application composed of four modules: the Extractor module for PET extraction; the Examiner module for analytic evaluation of PET sequence quality; the Mapper module for locating PET sequences in the genome sequences; and the Project Manager module for data organization. The performance of PET-Tool was evaluated through the analyses of 2.7 million PET sequences. It was demonstrated that PET-Tool is accurate and efficient in extracting PET sequences and removing artifacts from large-volume datasets. Using optimized mapping criteria, over 70% of quality PET sequences were mapped specifically to the genome sequences. On a 2.4 GHz Linux machine, it takes approximately six hours to process one million PETs from extraction to mapping. The speed, accuracy, and comprehensiveness have proved that PET-Tool is an important and useful component in PET experiments, and it can be extended to accommodate other related analyses of paired-end sequences. The Tool also provides user-friendly functions for data quality checks and a system for multi-layer data management.
Designing robust watermark barcodes for multiplex long-read sequencing.
Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth
2017-03-15
To attain acceptable sample misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length in multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates on the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample misassignment probabilities as low as 10^-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark . ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
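A minimal sketch of one ingredient of such a design: greedy selection of barcodes whose pairwise Levenshtein distance meets a threshold, so that a bounded number of insertions, deletions and substitutions cannot convert one barcode into another. The watermark construction and the chemistry constraints used by the authors are not reproduced here, and the barcode length and distance threshold are illustrative.

```python
import itertools

def levenshtein(a, b):
    """Edit distance counting insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def greedy_barcodes(length=6, min_dist=3, limit=48):
    """Greedily keep candidates that are far from every already-kept barcode.
    Real designs use longer barcodes and larger distances than shown here."""
    kept = []
    for cand in itertools.product("ACGT", repeat=length):
        cand = "".join(cand)
        if all(levenshtein(cand, b) >= min_dist for b in kept):
            kept.append(cand)
            if len(kept) == limit:
                break
    return kept

codes = greedy_barcodes()
print(len(codes), codes[:3])
```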
A multiplex primer design algorithm for target amplification of continuous genomic regions.
Ozturk, Ahmet Rasit; Can, Tolga
2017-06-19
Targeted next-generation sequencing (NGS) assays are cost-efficient and reliable alternatives to Sanger sequencing. For sequencing a very large set of genes, the target enrichment approach is suitable. However, for smaller genomic regions, the target amplification method is more efficient than both the target enrichment method and Sanger sequencing. The major difficulty of the target amplification method is the preparation of amplicons in terms of the required time, equipment, and labor. Multiplex PCR (MPCR) is a good solution to these problems. We propose a novel method to design MPCR primers for a continuous genomic region, following the best practices of clinically reliable PCR design processes. In an experimental setup with 48 different combinations of factors, we have shown that multiple parameters might affect finding the first feasible solution. Increasing the length of the initial primer candidate selection sequence gives better results, whereas waiting for a longer time to find the first feasible solution does not have a significant impact. We generated MPCR primer designs for the HBB whole gene, the MEFV coding regions, and human exons between 2,000 and 2,100 bp long. Our benchmarking experiments show that the proposed MPCR approach is able to produce reliable NGS assay primers for a given sequence in a reasonable amount of time.
Bedbrook, Claire N; Yang, Kevin K; Rice, Austin J; Gradinaru, Viviana; Arnold, Frances H
2017-10-01
There is growing interest in studying and engineering integral membrane proteins (MPs) that play key roles in sensing and regulating cellular response to diverse external signals. A MP must be expressed, correctly inserted and folded in a lipid bilayer, and trafficked to the proper cellular location in order to function. The sequence and structural determinants of these processes are complex and highly constrained. Here we describe a predictive, machine-learning approach that captures this complexity to facilitate successful MP engineering and design. Machine learning on carefully-chosen training sequences made by structure-guided SCHEMA recombination has enabled us to accurately predict the rare sequences in a diverse library of channelrhodopsins (ChRs) that express and localize to the plasma membrane of mammalian cells. These light-gated channel proteins of microbial origin are of interest for neuroscience applications, where expression and localization to the plasma membrane is a prerequisite for function. We trained Gaussian process (GP) classification and regression models with expression and localization data from 218 ChR chimeras chosen from a 118,098-variant library designed by SCHEMA recombination of three parent ChRs. We use these GP models to identify ChRs that express and localize well and show that our models can elucidate sequence and structure elements important for these processes. We also used the predictive models to convert a naturally occurring ChR incapable of mammalian localization into one that localizes well.
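A minimal sketch of the modelling step using scikit-learn Gaussian process models on one-hot encoded sequences and synthetic labels; the actual study trained on structure-guided recombination features and measurements from 218 chimeras, which are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    """Flattened one-hot encoding of an amino acid sequence."""
    x = np.zeros((len(seq), len(AA)))
    for i, a in enumerate(seq):
        x[i, AA.index(a)] = 1.0
    return x.ravel()

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AA), 30)) for _ in range(60)]  # synthetic chimeras
X = np.array([one_hot(s) for s in seqs])
expresses = rng.integers(0, 2, size=60)     # synthetic label: does it express?
localization = rng.normal(size=60)          # synthetic label: localization score

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=5.0)).fit(X, expresses)
reg = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=5.0),
                               normalize_y=True).fit(X, localization)

candidate = one_hot("".join(rng.choice(list(AA), 30))).reshape(1, -1)
print(clf.predict_proba(candidate))                  # P(not express), P(express)
print(reg.predict(candidate, return_std=True))       # predicted mean and uncertainty
```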
Rice, Austin J.; Gradinaru, Viviana; Arnold, Frances H.
2017-01-01
There is growing interest in studying and engineering integral membrane proteins (MPs) that play key roles in sensing and regulating cellular response to diverse external signals. A MP must be expressed, correctly inserted and folded in a lipid bilayer, and trafficked to the proper cellular location in order to function. The sequence and structural determinants of these processes are complex and highly constrained. Here we describe a predictive, machine-learning approach that captures this complexity to facilitate successful MP engineering and design. Machine learning on carefully-chosen training sequences made by structure-guided SCHEMA recombination has enabled us to accurately predict the rare sequences in a diverse library of channelrhodopsins (ChRs) that express and localize to the plasma membrane of mammalian cells. These light-gated channel proteins of microbial origin are of interest for neuroscience applications, where expression and localization to the plasma membrane is a prerequisite for function. We trained Gaussian process (GP) classification and regression models with expression and localization data from 218 ChR chimeras chosen from a 118,098-variant library designed by SCHEMA recombination of three parent ChRs. We use these GP models to identify ChRs that express and localize well and show that our models can elucidate sequence and structure elements important for these processes. We also used the predictive models to convert a naturally occurring ChR incapable of mammalian localization into one that localizes well. PMID:29059183
Ong, Hui San; Rahim, Mohd Syafiq; Firdaus-Raih, Mohd; Ramlan, Effirul Ikhwan
2015-01-01
The unique programmability of nucleic acids offers an alternative route to constructing excitable and functional nanostructures. This work introduces an autonomous protocol to construct DNA Tetris shapes (L-Shape, B-Shape, T-Shape and I-Shape) using modular DNA blocks. The protocol exploits the large number of sequence combinations available from the nucleic acid alphabet, thus allowing diversity to be applied in designing various DNA nanostructures. Instead of a deterministic set of sequences corresponding to a particular design, the protocol promotes a large pool of DNA shapes that can assemble to conform to any desired structure. By utilising evolutionary programming in the design stage, DNA blocks are subjected to processes such as sequence insertion, deletion and base shifting in order to enrich the diversity of the resulting shapes based on a set of cascading filters. The optimisation algorithm allows mutation to be exerted indefinitely on the candidate sequences until these sequences comply with all four fitness criteria. Candidates generated by the protocol are in agreement with the filter cascades and thermodynamic simulation. Further validation using gel electrophoresis indicated the formation of the designed shapes, supporting the plausibility of constructing DNA nanostructures in a more hierarchical, modular, and interchangeable manner.
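A small sketch of the evolutionary loop described above, with illustrative filters (GC content, homopolymer length, a forbidden restriction site, block length) standing in for the four fitness criteria of the protocol; the mutation operators mirror the insertion, deletion and base-shift moves mentioned in the abstract.

```python
import random

BASES = "ACGT"

def filters_ok(seq):
    """Cascade of illustrative filters; every filter must pass."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not 0.4 <= gc <= 0.6:
        return False
    if any(b * 5 in seq for b in BASES):      # no 5-base homopolymers
        return False
    if "GAATTC" in seq:                       # no EcoRI site (example constraint)
        return False
    return 20 <= len(seq) <= 30               # block length window

def mutate(seq):
    """Random insertion, deletion or base shift at a random position."""
    i = random.randrange(len(seq))
    op = random.choice(["insert", "delete", "shift"])
    if op == "insert":
        return seq[:i] + random.choice(BASES) + seq[i:]
    if op == "delete" and len(seq) > 1:
        return seq[:i] + seq[i + 1:]
    return seq[:i] + random.choice(BASES) + seq[i + 1:]

def design_block(length=25, max_steps=100000):
    """Mutate a candidate until it complies with every filter."""
    seq = "".join(random.choice(BASES) for _ in range(length))
    for _ in range(max_steps):
        if filters_ok(seq):
            return seq
        seq = mutate(seq)
    raise RuntimeError("no compliant block found")

print(design_block())
```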
The Design and Analysis of Transposon-Insertion Sequencing Experiments
Chao, Michael C.; Abel, Sören; Davis, Brigid M.; Waldor, Matthew K.
2016-01-01
Transposon-insertion sequencing (TIS) is a powerful approach that can be widely applied to genome-wide definition of loci that are required for growth in diverse conditions. However, experimental design choices and stochastic biological processes can heavily influence the results of TIS experiments and affect downstream statistical analysis. Here, we discuss TIS experimental parameters and how these factors relate to the benefits and limitations of the various statistical frameworks that can be applied to computational analysis of TIS data. PMID:26775926
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1982-01-01
Restructuring research objectives from a technical readiness demonstration program to an investigation of high risk, high payoff activities associated with producing photovoltaic modules using non-CZ sheet material is reported. Deletion of the module frame in favor of a frameless design, and modification in cell series parallel electrical interconnect configuration are reviewed. A baseline process sequence was identified for the fabrication of modules using the selected dendritic web sheet material, and economic evaluations of the sequence were completed.
Integrated aerodynamic-structural design of a forward-swept transport wing
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Grossman, Bernard; Kao, Pi-Jen; Polen, David M.; Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The introduction of composite materials is having a profound effect on aircraft design. Since these materials permit the designer to tailor material properties to improve structural, aerodynamic and acoustic performance, they require an integrated multidisciplinary design process. Furthermore, because of the complexity of the design process, numerical optimization methods are required. The utilization of integrated multidisciplinary design procedures for improving aircraft design is not currently feasible because of software coordination problems and the enormous computational burden. Even with the expected rapid growth of supercomputers and parallel architectures, these tasks will not be practical without the development of efficient methods for cross-disciplinary sensitivities and efficient optimization procedures. The present research is part of an ongoing effort which is focused on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration. A sequence of integrated wing design procedures has been developed in order to investigate various aspects of the design process.
Registry in a tube: multiplexed pools of retrievable parts for genetic design space exploration.
Woodruff, Lauren B A; Gorochowski, Thomas E; Roehner, Nicholas; Mikkelsen, Tarjei S; Densmore, Douglas; Gordon, D Benjamin; Nicol, Robert; Voigt, Christopher A
2017-02-17
Genetic designs can consist of dozens of genes and hundreds of genetic parts. After evaluating a design, it is desirable to implement changes without the cost and burden of starting the construction process from scratch. Here, we report a two-step process where a large design space is divided into deep pools of composite parts, from which individuals are retrieved and assembled to build a final construct. The pools are built via multiplexed assembly and sequenced using next-generation sequencing. Each pool consists of ∼20 Mb of up to 5000 unique and sequence-verified composite parts that are barcoded for retrieval by PCR. This approach is applied to a 16-gene nitrogen fixation pathway, which is broken into pools containing a total of 55 848 composite parts (71.0 Mb). The pools encompass an enormous design space (10^43 possible 23 kb constructs), from which an algorithm-guided 192-member 4.5 Mb library is built. Next, all 10^30 possible genetic circuits based on 10 repressors (NOR/NOT gates) are encoded in pools where each repressor is fused to all permutations of input promoters. These demonstrate that multiplexing can be applied to encompass entire design spaces from which individuals can be accessed and evaluated. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Application of genetic algorithm in integrated setup planning and operation sequencing
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen
2011-01-01
Process planning is an essential component for linking design and the manufacturing process. Setup planning and operation sequencing are two main tasks in process planning. Much previous research has solved these two problems separately. Considering that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process planning for a given part. The proposed approach and optimization methodology analyse the TAD (tool approach direction), the tolerance relations between features and the feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA approach, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
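A condensed sketch of the GA idea: permutation-encoded operation sequences evaluated by a cost index that counts setup (TAD) changes and heavily penalizes precedence violations. The operations, TADs and precedence pairs are illustrative, and crossover is omitted for brevity (only swap mutation and truncation selection are used).

```python
import random

# illustrative data: operation -> tool approach direction, and precedence pairs
TAD = {"op1": "+Z", "op2": "+Z", "op3": "-X", "op4": "+Z", "op5": "-X"}
PRECEDENCE = [("op1", "op2"), ("op3", "op5")]   # (a, b): a must precede b
OPS = list(TAD)

def cost(seq):
    setups = sum(1 for a, b in zip(seq, seq[1:]) if TAD[a] != TAD[b])
    violations = sum(1 for a, b in PRECEDENCE if seq.index(a) > seq.index(b))
    return setups + 100 * violations            # heavy penalty for infeasibility

def mutate(seq):
    i, j = random.sample(range(len(seq)), 2)    # swap two operations
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def ga(pop_size=30, generations=200):
    pop = [random.sample(OPS, len(OPS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```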
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
A visually-induced near-infrared spectroscopy (NIRS) response was utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and subjects were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli and the flickering sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
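A minimal sketch of the averaging step: because each target's flicker onsets are mutually independent, averaging NIRS epochs time-locked to one target's onsets reinforces that target's response while the responses to the other targets average out. The sampling rate, timing and haemodynamic response used below are synthetic stand-ins.

```python
import numpy as np

fs = 10.0                                  # NIRS sampling rate in Hz (illustrative)
epoch_len = int(10 * fs)                   # 10 s window after each flicker onset

def epoch_average(signal, onset_samples):
    """Average signal segments time-locked to the given stimulus onsets."""
    epochs = [signal[o:o + epoch_len] for o in onset_samples
              if o + epoch_len <= len(signal)]
    return np.mean(epochs, axis=0)

rng = np.random.default_rng(1)
n = int(300 * fs)                          # 5 minutes of recording
onsets = rng.choice(np.arange(0, n - epoch_len, int(18 * fs)),
                    size=12, replace=False)
hrf = np.exp(-np.arange(epoch_len) / (3 * fs))   # toy haemodynamic response
signal = rng.normal(0.0, 1.0, n)
for o in onsets:
    signal[o:o + epoch_len] += hrf               # responses locked to the onsets

print(np.round(epoch_average(signal, sorted(onsets))[:5], 2))  # response emerges
```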
NASA Astrophysics Data System (ADS)
Kobayashi, Takashi; Komoda, Norihisa
Traditional business process design methods, of which the use case is the most typical, provide no useful framework for designing the activity sequence. As a result, design efficiency and quality vary widely with the designer's experience and skill. In this paper, to solve this problem, we propose a model of business events and their state transitions (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In business process design using this model, we determine event occurrence conditions so that all events are synchronized with one another. We also propose a design pattern for deciding the event occurrence conditions (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to a credit card issuing process and estimate its effect.
A Project Course Sequence in Innovation and Commercialization of Medical Devices.
Eberhardt, Alan W; Tillman, Shea; Kirkland, Brandon; Sherrod, Brandon
2017-07-01
There exists a need for educational processes in which students gain experience with design and commercialization of medical devices. This manuscript describes the implementation of, and assessment results from, the first year offering of a project course sequence in Master of Engineering (MEng) in Design and Commercialization at our institution. The three-semester course sequence focused on developing and applying hands-on skills that contribute to product development to address medical device needs found within our university hospital and local community. The first semester integrated computer-aided drawing (CAD) as preparation for manufacturing of device-related components (hand machining, computer numeric control (CNC), three-dimensional (3D) printing, and plastics molding), followed by an introduction to microcontrollers (MCUs) and printed circuit boards (PCBs) for associated electronics and control systems. In the second semester, the students applied these skills on a unified project, working together to construct and test multiple weighing scales for wheelchair users. In the final semester, the students applied industrial design concepts to four distinct device designs, including user and context reassessment, human factors (functional and aesthetic) design refinement, and advanced visualization for commercialization. The assessment results are described, along with lessons learned and plans for enhancement of the course sequence.
26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... agreement that provides one of the parties to the agreement with the right to distribute, sell, or provide... any program or routine (that is, any sequence of machine-readable code) that is designed to cause a...
Launch Vehicle Design Process: Characterization, Technical Integration, and Lessons Learned
NASA Technical Reports Server (NTRS)
Blair, J. C.; Ryan, R. S.; Schutzenhofer, L. A.; Humphries, W. R.
2001-01-01
Engineering design is a challenging activity for any product. Since launch vehicles are highly complex and interconnected and have extreme energy densities, their design represents a challenge of the highest order. The purpose of this document is to delineate and clarify the design process associated with the launch vehicle for space flight transportation. The goal is to define and characterize a baseline for the space transportation design process. This baseline can be used as a basis for improving effectiveness and efficiency of the design process. The baseline characterization is achieved via compartmentalization and technical integration of subsystems, design functions, and discipline functions. First, a global design process overview is provided in order to show responsibility, interactions, and connectivity of overall aspects of the design process. Then design essentials are delineated in order to emphasize necessary features of the design process that are sometimes overlooked. Finally the design process characterization is presented. This is accomplished by considering project technical framework, technical integration, process description (technical integration model, subsystem tree, design/discipline planes, decision gates, and tasks), and the design sequence. Also included in the document are a snapshot relating to process improvements, illustrations of the process, a survey of recommendations from experienced practitioners in aerospace, lessons learned, references, and a bibliography.
Program Helps Decompose Complex Design Systems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Hall, Laura E.
1994-01-01
DeMAID (A Design Manager's Aid for Intelligent Decomposition) computer program is knowledge-based software system for ordering sequence of modules and identifying possible multilevel structure for design problem. Groups modular subsystems on basis of interactions among them. Saves considerable money and time in total design process, particularly in new design problem in which order of modules has not been defined. Available in two machine versions: Macintosh and Sun.
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
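A tiny sketch of the underlying idea: represent the processes as a design structure matrix and search for an ordering that minimizes feedback couplings (dependencies that point backwards in the sequence). DeMAID uses knowledge-based heuristics rather than the brute-force search shown here, and the example dependencies are illustrative.

```python
from itertools import permutations

# deps[p] = set of processes whose output p needs (illustrative couplings)
deps = {"aero": {"struct"}, "struct": {"loads"}, "loads": {"aero"},
        "controls": {"aero", "struct"}, "sizing": {"controls"}}

def feedbacks(order):
    """Count couplings that feed back to an earlier process in the sequence."""
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for p, needed in deps.items() for q in needed if pos[q] > pos[p])

best = min(permutations(deps), key=feedbacks)
print(best, feedbacks(best))   # an ordering with the fewest iterative feedbacks
```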
AMPLISAS: a web server for multilocus genotyping using next-generation amplicon sequencing data.
Sebastian, Alvaro; Herdegen, Magdalena; Migalska, Magdalena; Radwan, Jacek
2016-03-01
Next-generation sequencing (NGS) technologies are revolutionizing the fields of biology and medicine as powerful tools for amplicon sequencing (AS). Using combinations of primers and barcodes, it is possible to sequence targeted genomic regions with deep coverage for hundreds, even thousands, of individuals in a single experiment. This is extremely valuable for the genotyping of gene families in which locus-specific primers are often difficult to design, such as the major histocompatibility complex (MHC). The utility of AS is, however, limited by the high intrinsic sequencing error rates of NGS technologies and other sources of error such as polymerase amplification or chimera formation. Correcting these errors requires extensive bioinformatic post-processing of NGS data. Amplicon Sequence Assignment (AMPLISAS) is a tool that performs analysis of AS results in a simple and efficient way, while offering customization options for advanced users. AMPLISAS is designed as a three-step pipeline consisting of (i) read demultiplexing, (ii) unique sequence clustering and (iii) erroneous sequence filtering. Allele sequences and frequencies are retrieved in Excel spreadsheet format, making them easy to interpret. AMPLISAS performance has been successfully benchmarked against previously published genotyped MHC data sets obtained with various NGS technologies. © 2015 John Wiley & Sons Ltd.
Molecular Structure and Sequence in Complex Coacervates
NASA Astrophysics Data System (ADS)
Sing, Charles; Lytle, Tyler; Madinya, Jason; Radhakrishna, Mithun
Oppositely-charged polyelectrolytes in aqueous solution can undergo associative phase separation, in a process known as complex coacervation. This results in a polyelectrolyte-dense phase (coacervate) and polyelectrolyte-dilute phase (supernatant). There remain challenges in understanding this process, despite a long history in polymer physics. We use Monte Carlo simulation to demonstrate that molecular features (charge spacing, size) play a crucial role in governing the equilibrium in coacervates. We show how these molecular features give rise to strong monomer sequence effects, due to a combination of counterion condensation and correlation effects. We distinguish between structural and sequence-based correlations, which can be designed to tune the phase diagram of coacervation. Sequence effects further inform the physical understanding of coacervation, and provide the basis for new coacervation models that take monomer-level features into account.
A high-throughput Sanger strategy for human mitochondrial genome sequencing
2013-01-01
Background A population reference database of complete human mitochondrial genome (mtGenome) sequences is needed to enable the use of mitochondrial DNA (mtDNA) coding region data in forensic casework applications. However, the development of entire mtGenome haplotypes to forensic data quality standards is difficult and laborious. A Sanger-based amplification and sequencing strategy that is designed for automated processing, yet routinely produces high quality sequences, is needed to facilitate high-volume production of these mtGenome data sets. Results We developed a robust 8-amplicon Sanger sequencing strategy that regularly produces complete, forensic-quality mtGenome haplotypes in the first pass of data generation. The protocol works equally well on samples representing diverse mtDNA haplogroups and DNA input quantities ranging from 50 pg to 1 ng, and can be applied to specimens of varying DNA quality. The complete workflow was specifically designed for implementation on robotic instrumentation, which increases throughput and reduces both the opportunities for error inherent to manual processing and the cost of generating full mtGenome sequences. Conclusions The described strategy will assist efforts to generate complete mtGenome haplotypes which meet the highest data quality expectations for forensic genetic and other applications. Additionally, high-quality data produced using this protocol can be used to assess mtDNA data developed using newer technologies and chemistries. Further, the amplification strategy can be used to enrich for mtDNA as a first step in sample preparation for targeted next-generation sequencing. PMID:24341507
Becságh, Péter; Szakács, Orsolya
2014-10-01
During a diagnostic workflow for detecting sequence alterations, it is sometimes important to design an algorithm that combines screening and direct tests. Normally, the use of direct tests, mainly sequencing, is limited. There is an increasing need for effective screening tests that remain "closed tube" during the whole process, thereby decreasing the risk of PCR product contamination. The aim of this study was to design such a closed-tube, detection-probe-based screening assay to detect different kinds of sequence alterations in exon 11 of the human c-kit gene region. This region harbors various possible deletions and single-nucleotide changes. During assay setup, several probe chemistry formats were screened and tested. After some optimization steps, the TaqMan probe format was selected.
The PARIGA server for real time filtering and analysis of reciprocal BLAST results.
Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna
2013-01-01
BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences over a database to more complex procedures such as clustering or annotation. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure, and tools for parsing or filtering BLAST results for different purposes are then required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of Python-serialized objects, results can be filtered according to several parameters in real time, without re-running the process and without additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations very fast and easy to use.
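As an illustration of one of the filters mentioned above, the following sketch extracts reciprocal best hits from two tabular BLAST outputs (assuming the standard -outfmt 6 column layout with the bit score in column 12); the file names are hypothetical, and this is not how PARIGA stores or queries its results.

```python
import csv

def best_hits(blast_tab_path):
    """Return {query: best_target} from a tabular BLAST file (-outfmt 6),
    keeping the hit with the highest bit score (column 12)."""
    best = {}
    with open(blast_tab_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, target, bitscore = row[0], row[1], float(row[11])
            if query not in best or bitscore > best[query][1]:
                best[query] = (target, bitscore)
    return {q: t for q, (t, _) in best.items()}

def reciprocal_best_hits(a_vs_b_path, b_vs_a_path):
    """Pairs where A's best hit in B points back to A as B's best hit."""
    a_best = best_hits(a_vs_b_path)
    b_best = best_hits(b_vs_a_path)
    return [(a, b) for a, b in a_best.items() if b_best.get(b) == a]

# Example usage with hypothetical file names:
# print(reciprocal_best_hits("setA_vs_setB.tsv", "setB_vs_setA.tsv"))
```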
In vitro selection using a dual RNA library that allows primerless selection
Jarosch, Florian; Buchner, Klaus; Klussmann, Sven
2006-01-01
High-affinity target-binding aptamers are identified from random oligonucleotide libraries by an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX). Since the SELEX process includes a PCR amplification step, the randomized region of the oligonucleotide library needs to be flanked by two fixed primer-binding sequences. These primer-binding sites are often difficult to truncate because they may be necessary to maintain the structure of the aptamer or may even be part of the target-binding motif. We designed a novel type of RNA library that carries fixed sequences which constrain the oligonucleotides into a partly double-stranded structure, thereby minimizing the risk that the primer-binding sequences become part of the target-binding motif. Moreover, the specific design of the library, including the use of tandem RNA polymerase promoters, allows the selection of oligonucleotides without any primer-binding sequences. The library was used to select aptamers to the mirror-image peptide of ghrelin. Ghrelin is a potent stimulator of growth-hormone release and food intake. After selection, the identified aptamer sequences were directly synthesized in their mirror-image configuration. The final 44-nt Spiegelmer, named NOX-B11-3, blocks ghrelin action in a cell culture assay, displaying an IC50 of 4.5 nM at 37°C. PMID:16855281
Development of Pulsed Processes for the Manufacture of Solar Cells
NASA Technical Reports Server (NTRS)
1979-01-01
The development status of the process based upon ion implantation for the introduction of junctions and back surface fields is described. A process sequence is presented employing ion implantation and pulse processing. Efforts to improve throughput and decrease process element costs for furnace annealing are described. Design studies for a modular 3,000 wafer per hour pulse processor are discussed.
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
Small scale sequence automation pays big dividends
NASA Technical Reports Server (NTRS)
Nelson, Bill
1994-01-01
Galileo sequence design and integration are supported by a suite of formal software tools. Sequence review, however, is largely a manual process with reviewers scanning hundreds of pages of cryptic computer printouts to verify sequence correctness. Beginning in 1990, a series of small, PC based sequence review tools evolved. Each tool performs a specific task but all have a common 'look and feel'. The narrow focus of each tool means simpler operation, and easier creation, testing, and maintenance. Benefits from these tools are (1) decreased review time by factors of 5 to 20 or more with a concomitant reduction in staffing, (2) increased review accuracy, and (3) excellent returns on time invested.
Program Helps Decompose Complex Design Systems
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Hall, Laura E.
1995-01-01
DeMAID (Design Manager's Aid for Intelligent Decomposition) is a knowledge-based computer program for ordering the sequence of modules and identifying a possible multilevel structure for design problems such as large platforms in outer space. It groups modular subsystems on the basis of the interactions among them, saving a considerable amount of money and time in the total design process, particularly for a new design problem in which the order of the modules has not been defined. Originally written for design problems, it is also applicable to problems containing modules (processes) that take inputs and generate outputs. Available in three machine versions: Macintosh (written in Symantec's Think C 3.01), Sun, and SGI IRIS (in C).
Capturing the genetic makeup of the active microbiome in situ.
Singer, Esther; Wagner, Michael; Woyke, Tanja
2017-09-01
More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions.
Capturing the genetic makeup of the active microbiome in situ
Singer, Esther; Wagner, Michael; Woyke, Tanja
2017-01-01
More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions. PMID:28574490
26 CFR 1.197-2 - Amortization of goodwill and certain other intangibles.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., process, design, pattern, know-how, format, package design, computer software (as defined in paragraph (c... section 1253(b)(1) and includes any agreement that provides one of the parties to the agreement with the... any program or routine (that is, any sequence of machine-readable code) that is designed to cause a...
Natural or Artificial: Is the Route of L2 Development Teachable?
ERIC Educational Resources Information Center
Zhang, Xian; Lantolf, James P.
2015-01-01
The current study was designed to assess the central claim of the Teachability Hypothesis (TH), a corollary of general Processability Theory (PT), which predicts instruction cannot alter posited universal, hierarchically organized psycholinguistic constraints behind PT's developmental sequences. We employed an interventional design, which adhered…
NASA Astrophysics Data System (ADS)
Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril
2018-02-01
White matter is composed of irregularly packed axons, leading to structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo sequences have shown that the diffusivity transverse to axons in this extra-axonal space depends on the frequency of the employed sequence. In this study, we observe the same frequency dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters and makes it possible to control the level of disorder of the generated phantoms. The influence of various geometrical features present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the frequency dependence of the extra-cellular perpendicular diffusivity was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an adequate analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion, and especially beading, have non-negligible effects on this extracellular diffusion metric, which may be measured using standard OGSE DW-MRI clinical protocols.
Using a contextualized sensemaking model for interaction design: A case study of tumor contouring.
Aselmaa, Anet; van Herk, Marcel; Laprie, Anne; Nestle, Ursula; Götz, Irina; Wiedenmann, Nicole; Schimek-Jasch, Tanja; Picaud, Francois; Syrykh, Charlotte; Cagetti, Leonel V; Jolnerovski, Maria; Song, Yu; Goossens, Richard H M
2017-01-01
Sensemaking theories help designers understand the cognitive processes of a user when he/she performs a complicated task. This paper introduces a two-step approach to incorporating sensemaking support within the design of health information systems by: (1) modeling the sensemaking process of physicians while performing a task, and (2) identifying software interaction design requirements that support sensemaking based on this model. The two-step approach is presented through a case study of the tumor contouring clinical task for radiotherapy planning. In the first step of the approach, a contextualized sensemaking model was developed to describe the sensemaking process based on the goal, the workflow and the context of the task. In the second step, based on a research software prototype, an experiment was conducted in which three contouring tasks were each performed by eight physicians. Four types of navigation interactions and five types of interaction sequence patterns were identified by analyzing the interaction log data gathered from those twenty-four cases. Further in-depth study of each of the navigation interactions and interaction sequence patterns in relation to the contextualized sensemaking model revealed five main areas for design improvements to increase sensemaking support. Outcomes of the case study indicate that the proposed two-step approach was beneficial for gaining a deeper understanding of the sensemaking process during the task, as well as for identifying design requirements for better sensemaking support. Copyright © 2016. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam;
2009-01-01
The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
Sequence-based design of bioactive small molecules that target precursor microRNAs.
Velagapudi, Sai Pradeep; Gallo, Steven M; Disney, Matthew D
2014-04-01
Oligonucleotides are designed to target RNA using base pairing rules, but they can be hampered by poor cellular delivery and nonspecific stimulation of the immune system. Small molecules are preferred as lead drugs or probes but cannot be designed from sequence. Herein, we describe an approach termed Inforna that designs lead small molecules for RNA from solely sequence. Inforna was applied to all human microRNA hairpin precursors, and it identified bioactive small molecules that inhibit biogenesis by binding nuclease-processing sites (44% hit rate). Among 27 lead interactions, the most avid interaction is between a benzimidazole (1) and precursor microRNA-96. Compound 1 selectively inhibits biogenesis of microRNA-96, upregulating a protein target (FOXO1) and inducing apoptosis in cancer cells. Apoptosis is ablated when FOXO1 mRNA expression is knocked down by an siRNA, validating compound selectivity. Markedly, microRNA profiling shows that 1 only affects microRNA-96 biogenesis and is at least as selective as an oligonucleotide.
Sequence-based design of bioactive small molecules that target precursor microRNAs
Velagapudi, Sai Pradeep; Gallo, Steven M.; Disney, Matthew D.
2014-01-01
Oligonucleotides are designed to target RNA using base pairing rules; however, they are hampered by poor cellular delivery and non-specific stimulation of the immune system. Small molecules are preferred as lead drugs or probes, but cannot be designed from sequence. Herein, we describe an approach termed Inforna that designs lead small molecules for RNA from solely sequence. Inforna was applied to all human microRNA precursors and identified bioactive small molecules that inhibit biogenesis by binding to nuclease processing sites (41% hit rate). Amongst 29 lead interactions, the most avid interaction is between a benzimidazole (1) and precursor microRNA-96. Compound 1 selectively inhibits biogenesis of microRNA-96, upregulating a protein target (FOXO1) and inducing apoptosis in cancer cells. Apoptosis is ablated when FOXO1 mRNA expression is knocked down by an siRNA, validating compound selectivity. Importantly, microRNA profiling shows that 1 only significantly affects microRNA-96 biogenesis and is more selective than an oligonucleotide. PMID:24509821
Discrete State Change Model of Manufacturing Quality to Aid Assembly Process Design
NASA Astrophysics Data System (ADS)
Koga, Tsuyoshi; Aoyama, Kazuhiro
This paper proposes a representation model of the quality state change in an assembly process that can be used in a computer-aided process design system. In order to formalize the state change of the manufacturing quality in the assembly process, the functions, operations, and quality changes in the assembly process are represented as a network model that can simulate discrete events. This paper also develops a design method for the assembly process. The design method calculates the space of quality state change and outputs a better assembly process (better operations and better sequences) that can be used to obtain the intended quality state of the final product. A computational redesigning algorithm of the assembly process that considers the manufacturing quality is developed. The proposed method can be used to design an improved manufacturing process by simulating the quality state change. A prototype system for planning an assembly process is implemented and applied to the design of an auto-breaker assembly process. The result of the design example indicates that the proposed assembly process planning method outputs a better manufacturing scenario based on the simulation of the quality state change.
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
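The stochastic component described above can be illustrated with a minimal simulation in which every copy of an amplicon is duplicated in a cycle with probability equal to the per-cycle efficiency, so random early events skew the final representation of an initially uniform pool. The efficiency and cycle number below are illustrative values, not the paper's fitted parameters.

```python
import random

def amplify(n_start=1, cycles=15, efficiency=0.8, seed=None):
    """Simulate PCR on a single amplicon species: in each cycle every copy
    is duplicated with probability `efficiency` (binomial growth)."""
    rng = random.Random(seed)
    n = n_start
    for _ in range(cycles):
        n += sum(rng.random() < efficiency for _ in range(n))
    return n

# Amplify a pool of 10 distinct single-copy amplicons and inspect the spread
# of their final representation, which illustrates PCR stochasticity.
final_counts = [amplify(seed=i) for i in range(10)]
total = sum(final_counts)
print([round(c / total, 3) for c in final_counts])
```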
ERIC Educational Resources Information Center
Jaggars, Shanna Smith; Hodara, Michelle
2011-01-01
The developmental education process, as it is typically implemented in colleges across the country, seems straightforward: underprepared students are assessed and placed into an appropriate developmental course sequence designed to prepare them for college-level work; once finished with the sequence, these students presumably then move on to…
ERIC Educational Resources Information Center
Hernández, María Isabel; Couso, Digna; Pintó, Roser
2015-01-01
The study we have carried out aims to characterize 15-to 16-year-old students' learning progressions throughout the implementation of a teaching-learning sequence on the acoustic properties of materials. Our purpose is to better understand students' modeling processes about this topic and to identify how the instructional design and actual…
PRISE2: software for designing sequence-selective PCR primers and probes.
Huang, Yu-Ting; Yang, Jiue-in; Chrobak, Marek; Borneman, James
2014-09-25
PRISE2 is a new software tool for designing sequence-selective PCR primers and probes. To achieve high level of selectivity, PRISE2 allows the user to specify a collection of target sequences that the primers are supposed to amplify, as well as non-target sequences that should not be amplified. The program emphasizes primer selectivity on the 3' end, which is crucial for selective amplification of conserved sequences such as rRNA genes. In PRISE2, users can specify desired properties of primers, including length, GC content, and others. They can interactively manipulate the list of candidate primers, to choose primer pairs that are best suited for their needs. A similar process is used to add probes to selected primer pairs. More advanced features include, for example, the capability to define a custom mismatch penalty function. PRISE2 is equipped with a graphical, user-friendly interface, and it runs on Windows, Macintosh or Linux machines. PRISE2 has been tested on two very similar strains of the fungus Dactylella oviparasitica, and it was able to create highly selective primers and probes for each of them, demonstrating the ability to create useful sequence-selective assays. PRISE2 is a user-friendly, interactive software package that can be used to design high-quality selective primers for PCR experiments. In addition to choosing primers, users have an option to add a probe to any selected primer pair, enabling design of Taqman and other primer-probe based assays. PRISE2 can also be used to design probes for FISH and other hybridization-based assays.
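To illustrate why the 3' end is emphasized, the toy scoring function below weights mismatches between a primer and a candidate annealing site exponentially toward the 3' terminus, so a single 3'-terminal mismatch against a non-target site outweighs several 5' mismatches; this weighting scheme is a hypothetical example, not the penalty function used by PRISE2.

```python
def three_prime_weighted_mismatches(primer, template_site):
    """Score mismatches between a primer and the site it would anneal to,
    weighting positions near the 3' end (right-hand side) more heavily."""
    assert len(primer) == len(template_site)
    n = len(primer)
    score = 0.0
    for i, (p, t) in enumerate(zip(primer.upper(), template_site.upper())):
        if p != t:
            score += 2 ** (i - n + 1)  # weight 1 at the 3' terminus, halving toward 5'
    return score

primer = "GGATCCTTACGA"
target_site = "GGATCCTTACGA"     # perfect match on the intended target
nontarget_site = "GGATCCTTACGT"  # single mismatch at the 3' terminus
print(three_prime_weighted_mismatches(primer, target_site))     # 0.0
print(three_prime_weighted_mismatches(primer, nontarget_site))  # 1.0
```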
SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences
NASA Astrophysics Data System (ADS)
Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.
1994-11-01
A key component of JPL's strategy to make space missions faster, better and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground software intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that relate product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.
SEQ-REVIEW: A tool for reviewing and checking spacecraft sequences
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; El-Boushi, Mekki; Starbird, Thomas J.; Zawacki, Steven J.
1994-01-01
A key component of JPL's strategy to make space missions faster, better and cheaper is the Advanced Multi-Mission Operations System (AMMOS), a ground software intensive system currently in use and in further development. AMMOS intends to eliminate the cost of re-engineering a ground system for each new JPL mission. This paper discusses SEQ-REVIEW, a component of AMMOS that was designed to facilitate and automate the task of reviewing and checking spacecraft sequences. SEQ-REVIEW is a smart browser for inspecting files created by other sequence generation tools in the AMMOS system. It can parse sequence-related files according to a computer-readable version of a 'Software Interface Specification' (SIS), which is a standard document for defining file formats. It lets users display one or several linked files and check simple constraints using a Basic-like 'Little Language'. SEQ-REVIEW represents the first application of the Quality Function Deployment (QFD) method to sequence software development at JPL. The paper will show how the requirements for SEQ-REVIEW were defined and converted into a design based on object-oriented principles. The process starts with interviews of potential users, a small but diverse group that spans multiple disciplines and 'cultures'. It continues with the development of QFD matrices that relate product functions and characteristics to user-demanded qualities. These matrices are then turned into a formal Software Requirements Document (SRD). The process concludes with the design phase, in which the CRC (Class, Responsibility, Collaboration) approach was used to convert requirements into a blueprint for the final product.
NASA Technical Reports Server (NTRS)
1981-01-01
Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSON was refined and equipment design and specification work was completed. SAMICS cost analysis work accelerated, format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavromatis, K; Ivanova, N; Barry, Kerrie
2007-01-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
Mission Operations of the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Bass, Deborah; Lauback, Sharon; Mishkin, Andrew; Limonadi, Daniel
2007-01-01
A document describes a system of processes involved in planning, commanding, and monitoring operations of the rovers Spirit and Opportunity of the Mars Exploration Rover mission. The system is designed to minimize command turnaround time, given that inherent uncertainties in terrain conditions and in successful completion of planned landed spacecraft motions preclude planning of some spacecraft activities until the results of prior activities are known by the ground-based operations team. The processes are partitioned into those (designated as tactical) that must be tied to the Martian clock and those (designated strategic) that can, without loss, be completed in a more leisurely fashion. The tactical processes include assessment of downlinked data, refinement and validation of activity plans, sequencing of commands, and integration and validation of sequences. Strategic processes include communications planning and generation of long-term activity plans. The primary benefit of this partition is to enable the tactical portion of the team to focus solely on tasks that contribute directly to meeting the deadlines for commanding the rovers each sol (1 sol = 1 Martian day) - achieving a turnaround time of 18 hours or less, while facilitating strategic team interactions with other organizations that do not work on a Mars time schedule.
Song, Youngjun; Takahashi, Tsukasa; Kim, Sejung; Heaney, Yvonne C; Warner, John; Chen, Shaochen; Heller, Michael J
2017-01-11
We demonstrate a DNA double-write process that uses UV to pattern a uniquely designed DNA write material, which produces two distinct binding identities for hybridizing two different complementary DNA sequences. The process requires no modification to the DNA by chemical reagents and allows programmed DNA self-assembly and further UV patterning in the UV exposed and nonexposed areas. Multilayered DNA patterning with hybridization of fluorescently labeled complementary DNA sequences, biotin probe/fluorescent streptavidin complexes, and DNA patterns with 500 nm line widths were all demonstrated.
Information Processing in Auditory-Visual Conflict.
ERIC Educational Resources Information Center
Henker, Barbara A.; Whalen, Carol K.
1972-01-01
The present study used a set of bimodal (auditory-visual) conflict designed specifically for the preschool child. The basic component was a match-to-sample sequence designed to reduce the often-found contaminating factors in studies with young children: failure to understand or remember instructions, inability to perform the indicator response, or…
Chang'E-3 data pre-processing system based on scientific workflow
NASA Astrophysics Data System (ADS)
tan, xu; liu, jianjun; wang, yuanyuan; yan, wei; zhang, xiaoxia; li, chunlai
2016-04-01
The Chang'E-3 (CE3) mission has obtained a huge amount of lunar scientific data. Data pre-processing is an important segment of the CE3 ground research and application system. With a dramatic increase in the demand for data research and application, a Chang'E-3 data pre-processing system (CEDPS) based on scientific workflow is proposed to make scientists more flexible and productive by automating data-driven processing. The system should allow the planning, conduct and control of the data processing procedure with the following capabilities: describing a data processing task, including (1) defining input and output data, (2) defining data relationships, (3) defining the sequence of tasks, (4) defining the communication between tasks, (5) defining mathematical formulas, and (6) defining the relationship between tasks and data; and automatic execution of tasks. Accordingly, the way a task is described determines how flexible the system is. We design a workflow designer, a visual environment for capturing processes as workflows, and discuss its three-level model: (1) data relationships are established through a product tree; (2) the process model is constructed as a directed acyclic graph (DAG), in which a set of workflow constructs, including Sequence, Loop, Merge and Fork, are composable with one another; and (3) to reduce the complexity of modelling mathematical formulas within the DAG, semantic modelling based on MathML is adopted. On top of that, we present how the CE3 data are processed with CEDPS.
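A minimal sketch of the DAG-based task-sequencing idea behind such a workflow designer: each task declares the tasks it depends on, and a topological sort yields a valid execution order. The task names and dependencies below are invented for illustration and do not reflect the actual CEDPS product tree.

```python
from graphlib import TopologicalSorter  # available in Python 3.9+

# Hypothetical CE3-style pre-processing tasks mapped to the tasks they depend on
# (each task may run only after its predecessors have produced their data).
workflow = {
    "radiometric_correction": {"read_raw_level0"},
    "geometric_correction": {"radiometric_correction"},
    "mosaic_product": {"geometric_correction"},
    "read_raw_level0": set(),
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # one valid execution sequence of the workflow tasks
```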
SNAD: Sequence Name Annotation-based Designer.
Sidorov, Igor A; Reshetov, Denis A; Gorbalenya, Alexander E
2009-08-14
A growing diversity of biological data is tagged with unique identifiers (UIDs) associated with polynucleotides and proteins to ensure efficient computer-mediated data storage, maintenance, and processing. These identifiers, which are not informative for most people, are often substituted by biologically meaningful names in various presentations to facilitate utilization and dissemination of sequence-based knowledge. This substitution is commonly done manually, which may be a tedious exercise prone to mistakes and omissions. Here we introduce SNAD (Sequence Name Annotation-based Designer), which mediates automatic conversion of sequence UIDs (associated with a multiple alignment or phylogenetic tree, or supplied as a plain text list) into biologically meaningful names and acronyms. This conversion is directed by precompiled or user-defined templates that exploit the wealth of annotation available in cognate entries of external databases. Using examples, we demonstrate how this tool can be used to generate names for practical purposes, particularly in virology. A tool for controllable annotation-based conversion of sequence UIDs into biologically meaningful names and acronyms has been developed and placed into service, fostering links between the quality of sequence annotation and the efficiency of communication and knowledge dissemination among researchers.
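The core substitution step can be sketched as a template filled from an annotation table keyed by UIDs. The accessions, annotation fields and template below are illustrative only; SNAD itself retrieves annotation from the cognate entries of external databases rather than from a local dictionary.

```python
# Hypothetical annotation table keyed by sequence UIDs; SNAD retrieves such
# fields from the cognate database entries instead of a hard-coded table.
ANNOTATION = {
    "NC_004718.3": {"organism": "SARS-CoV", "year": 2003},
    "NC_001802.1": {"organism": "HIV-1", "year": 1997},
}

def rename(uids, template="{organism}_{year}"):
    """Convert sequence UIDs into biologically meaningful labels via a template."""
    return {uid: template.format(**ANNOTATION[uid]) for uid in uids if uid in ANNOTATION}

tree_leaves = ["NC_004718.3", "NC_001802.1", "XX_000000.0"]  # last UID lacks annotation
print(rename(tree_leaves))
```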
Automatic Synthesis of UML Designs from Requirements in an Iterative Process
NASA Technical Reports Server (NTRS)
Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)
2001-01-01
The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor for the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-) requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite way of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: Statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
NASA Technical Reports Server (NTRS)
1979-01-01
The functions performed by the systems management (SM) application software are described along with the design employed to accomplish these functions. The operational sequences (OPS) control segments and the cyclic processes they control are defined. The SM specialist function control (SPEC) segments and the display controlled 'on-demand' processes that are invoked by either an OPS or SPEC control segment as a direct result of an item entry to a display are included. Each processing element in the SM application is described including an input/output table and a structured control flow diagram. The flow through the module and other information pertinent to that process and its interfaces to other processes are included.
Construction of a scFv Library with Synthetic, Non-combinatorial CDR Diversity.
Bai, Xuelian; Shim, Hyunbo
2017-01-01
Many large synthetic antibody libraries have been designed, constructed, and successfully generated high-quality antibodies suitable for various demanding applications. While synthetic antibody libraries have many advantages such as optimized framework sequences and a broader sequence landscape than natural antibodies, their sequence diversities typically are generated by random combinatorial synthetic processes which cause the incorporation of many undesired CDR sequences. Here, we describe the construction of a synthetic scFv library using oligonucleotide mixtures that contain predefined, non-combinatorially synthesized CDR sequences. Each CDR is first inserted to a master scFv framework sequence and the resulting single-CDR libraries are subjected to a round of proofread panning. The proofread CDR sequences are assembled to produce the final scFv library with six diversified CDRs.
Accelerated design of bioconversion processes using automated microscale processing techniques.
Lye, Gary J; Ayazi-Shamlou, Parviz; Baganz, Frank; Dalby, Paul A; Woodley, John M
2003-01-01
Microscale processing techniques are rapidly emerging as a means to increase the speed of bioprocess design and reduce material requirements. Automation of these techniques can reduce labour intensity and enable a wider range of process variables to be examined. This article examines recent research on various individual microscale unit operations including microbial fermentation, bioconversion and product recovery techniques. It also explores the potential of automated whole process sequences operated in microwell formats. The power of the whole process approach is illustrated by reference to a particular bioconversion, namely the Baeyer-Villiger oxidation of bicyclo[3.2.0]hept-2-en-6-one for the production of optically pure lactones.
Interactive computer graphics system for structural sizing and analysis of aircraft structures
NASA Technical Reports Server (NTRS)
Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.
1975-01-01
A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.
Data processing system for the Sneg-2MP experiment
NASA Technical Reports Server (NTRS)
Gavrilova, Y. A.
1980-01-01
The data processing system for scientific experiments on stations of the "Prognoz" type provides for the processing sequence to be broken down into a number of consecutive stages: preliminary processing, primary processing, secondary processing. The tasks of each data processing stage are examined for an experiment designed to study gamma flashes of galactic origin and solar flares lasting from several minutes to seconds in the 20 kev to 1000 kev energy range.
Minimization In Digital Design As A Meta-Planning Problem
NASA Astrophysics Data System (ADS)
Ho, William P. C.; Wu, Jung-Gen
1987-05-01
In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.
Integral design method for simple and small Mars lander system using membrane aeroshell
NASA Astrophysics Data System (ADS)
Sakagami, Ryo; Takahashi, Ryohei; Wachi, Akifumi; Koshiro, Yuki; Maezawa, Hiroyuki; Kasai, Yasko; Nakasuka, Shinichi
2018-03-01
To execute Mars surface exploration missions, spacecraft need to overcome the difficulties of the Mars entry, descent, and landing (EDL) sequences. Previous landing missions overcame these challenges with complicated systems that could only be executed by organizations with mature technology and abundant financial resources. In this paper, we propose a novel integral design methodology for a small, simple Mars lander that is achievable even by organizations with limited technology and resources such as universities or emerging countries. We aim to design a lander (including its interplanetary cruise stage) whose size and mass are under 1 m3 and 150 kg, respectively. We adopted only two components for Mars EDL process: a "membrane aeroshell" for the Mars atmospheric entry and descent sequence and one additional mechanism for the landing sequence. The landing mechanism was selected from the following three candidates: (1) solid thrusters, (2) aluminum foam, and (3) a vented airbag. We present a reasonable design process, visualize dependencies among parameters, summarize sizing methods for each component, and propose the way to integrate these components into one system. To demonstrate the effectiveness, we applied this methodology to the actual Mars EDL mission led by the National Institute of Information and Communications Technology (NICT) and the University of Tokyo. As a result, an 80 kg class Mars lander with a 1.75 m radius membrane aeroshell and a vented airbag was designed, and the maximum landing shock that the lander will receive was 115 G.
ERIC Educational Resources Information Center
Timerman, Anthony P.; Fenrick, Angela M.; Zamis, Thomas M.
2009-01-01
A sequence of exercises for the isolation and characterization of invertase (E.C. 3.2.1.26) from baker's yeast obtained from a local grocery store is outlined. Because the enzyme is colorless, the use of colored markers and the sequence of purification steps are designed to "visualize" the process by which a colorless protein is selectively…
OfftargetFinder: a web tool for species-specific RNAi design.
Good, R T; Varghese, T; Golz, J F; Russell, D A; Papanicolaou, A; Edwards, O; Robin, C
2016-04-15
RNA interference (RNAi) technology is being developed as a weapon for pest insect control. To maximize the specificity that such an approach affords we have developed a bioinformatic web tool that searches the ever-growing arthropod transcriptome databases so that pest-specific RNAi sequences can be identified. This will help technology developers finesse the design of RNAi sequences and suggests which non-target species should be assessed in the risk assessment process. http://rnai.specifly.org crobin@unimelb.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
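The underlying specificity check can be sketched as a k-mer comparison: enumerate the 21-mers (a typical siRNA length) of a candidate dsRNA and intersect them with the 21-mers of non-target transcripts. The sequences below are invented, and this is not the OfftargetFinder pipeline, only an illustration of the idea.

```python
def kmers(seq, k=21):
    """All substrings of length k in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def off_target_hits(candidate_dsrna, nontarget_transcripts, k=21):
    """Return, per non-target species, the k-mers the candidate shares with it."""
    cand = kmers(candidate_dsrna, k)
    return {name: cand & kmers(seq, k) for name, seq in nontarget_transcripts.items()}

# Hypothetical pest target region and a non-target (e.g. pollinator) transcript.
candidate = "ATG" + "ACGTTGCAGT" * 5
nontargets = {"non_target_species": "CCC" + candidate[3:30] + "GGGTTTAAACCC"}
hits = off_target_hits(candidate, nontargets)
print({name: len(h) for name, h in hits.items()})  # shared 21-mers flag off-target risk
```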
CEM-designer: design of custom expression microarrays in the post-ENCODE Era.
Arnold, Christian; Externbrink, Fabian; Hackermüller, Jörg; Reiche, Kristin
2014-11-10
Microarrays are widely used in gene expression studies, and custom expression microarrays are popular to monitor expression changes of a customer-defined set of genes. However, the complexity of transcriptomes uncovered recently makes custom expression microarray design a non-trivial task. Pervasive transcription and alternative processing of transcripts generate a wealth of interwoven transcripts that requires well-considered probe design strategies and is largely neglected in existing approaches. We developed the web server CEM-Designer that facilitates microarray platform independent design of custom expression microarrays for complex transcriptomes. CEM-Designer covers (i) the collection and generation of a set of unique target sequences from different sources and (ii) the selection of a set of sensitive and specific probes that optimally represents the target sequences. Probe design itself is left to third party software to ensure that probes meet provider-specific constraints. CEM-Designer is available at http://designpipeline.bioinf.uni-leipzig.de. Copyright © 2014 Elsevier B.V. All rights reserved.
The Cassini Solstice Mission: Streamlining Operations by Sequencing with PIEs
NASA Technical Reports Server (NTRS)
Vandermey, Nancy; Alonge, Eleanor K.; Magee, Kari; Heventhal, William
2014-01-01
The Cassini Solstice Mission (CSM) is the second extended mission phase of the highly successful Cassini/Huygens mission to Saturn. Conducted at a much-reduced funding level, operations for the CSM have been streamlined and simplified significantly. Integration of the science timeline, which involves allocating observation time in a balanced manner to each of the five different science disciplines (with representatives from the twelve different science instruments), has long been a labor-intensive endeavor. Lessons learned from the prime mission (2004-2008) and first extended mission (Equinox mission, 2008-2010) were utilized to design a new process involving PIEs (Pre-Integrated Events) to ensure the highest priority observations for each discipline could be accomplished despite reduced work force and overall simplification of processes. Discipline-level PIE lists were managed by the Science Planning team and graphically mapped to aid timeline deconfliction meetings prior to assigning discrete segments of time to the various disciplines. Periapse segments are generally discipline-focused, with the exception of a handful of PIEs. In addition to all PIEs being documented in a spreadsheet, allocated out-of-discipline PIEs were entered into the Cassini Information Management System (CIMS) well in advance of timeline integration. The disciplines were then free to work the rest of the timeline internally, without the need for frequent interaction, debate, and negotiation with representatives from other disciplines. As a result, the number of integration meetings has been cut back extensively, freeing up workforce. The sequence implementation process was streamlined as well, combining two previous processes (and teams) into one. The new Sequence Implementation Process (SIP) schedules 22 weeks to build each 10-week-long sequence, and only 3 sequence processes overlap. This differs significantly from prime mission during which 5-week-long sequences were built in 24 weeks, with 6 overlapping processes.
Biocuration in the structure-function linkage database: the anatomy of a superfamily.
Holliday, Gemma L; Brown, Shoshana D; Akiva, Eyal; Mischel, David; Hicks, Michael A; Morris, John H; Huang, Conrad C; Meng, Elaine C; Pegg, Scott C-H; Ferrin, Thomas E; Babbitt, Patricia C
2017-01-01
With ever-increasing amounts of sequence data available in both the primary literature and sequence repositories, there is a bottleneck in annotating molecular function to a sequence. This article describes the biocuration process and methods used in the structure-function linkage database (SFLD) to help address some of the challenges. We discuss how the hierarchy within the SFLD allows us to infer detailed functional properties for functionally diverse enzyme superfamilies in which all members are homologous, conserve an aspect of their chemical function and have associated conserved structural features that enable the chemistry. Also presented is the Enzyme Structure-Function Ontology (ESFO), which has been designed to capture the relationships between enzyme sequence, structure and function that underlie the SFLD and is used to guide the biocuration processes within the SFLD. http://sfld.rbvi.ucsf.edu/. © The Author 2017. Published by Oxford University Press.
Cloud-based adaptive exon prediction for DNA analysis.
Putluri, Srinivasareddy; Zia Ur Rahman, Md; Fathima, Shaik Yasmeen
2018-02-01
Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of such sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs will be minimised by use of cloud services. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques. Adaptive signal processing techniques have been found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using variable normalised least mean square and its maximum normalised variants to reduce computational complexity. Finally, performance evaluation of the various AEPs is done based on measures such as sensitivity, specificity and precision using various standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database.
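The three-base periodicity that such predictors exploit can be sketched by measuring the spectral power of a DNA window at frequency 1/3: coding regions tend to show an elevated peak there. The toy sequences below are illustrative, and this simple DFT-based measure is not the authors' NLMS-based adaptive predictor.

```python
import cmath

def period3_power(window):
    """Spectral power at frequency 1/3, summed over the four A/C/G/T
    binary indicator sequences of a DNA window."""
    n = len(window)
    total = 0.0
    for base in "ACGT":
        # DFT coefficient of the indicator sequence at frequency 1/3
        coeff = sum(cmath.exp(-2j * cmath.pi * i / 3)
                    for i, b in enumerate(window.upper()) if b == base)
        total += abs(coeff) ** 2
    return total / n

coding_like = "ATGGCCGCAGCTGCAGCAGCGGCTGCAGCAGCC"  # strong codon-style repetition
random_like = "ATCGGATTACCGTAGCTTAGCGATCCGATAGCA"
print(period3_power(coding_like), period3_power(random_like))
```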
CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.
Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun
2012-09-15
To address the impending need for exploring the rapidly increasing transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as the input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs and generates image, JSON and database-compatible CSV text files that are directly utilized by different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License, available at http://bioinfolab.muohio.edu/CBrowse/ liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn Supplementary data are available at Bioinformatics online.
Real-Time Courseware Design: The LAVAC Video Sequencer[R].
ERIC Educational Resources Information Center
Toma, Tony
Teachers have acknowledged the richer learning environment and interactivity of multimedia teaching, its flexibility to different learning styles, and learner control that allows the learner to fully engage in the learning process. However, they still have problems in courseware design because their work is mainly centered on exercises and not on…
LongISLND: in silico sequencing of lengthy and noisy datatypes
Lau, Bayo; Mohiyuddin, Marghoob; Mu, John C.; Fang, Li Tai; Bani Asadi, Narges; Dallett, Carolina; Lam, Hugo Y. K.
2016-01-01
Summary: LongISLND is a software package designed to simulate sequencing data according to the characteristics of third generation, single-molecule sequencing technologies. The general software architecture is easily extendable, as demonstrated by the emulation of Pacific Biosciences (PacBio) multi-pass sequencing with P5 and P6 chemistries, producing data in FASTQ, H5, and the latest PacBio BAM format. We demonstrate its utility by downstream processing with consensus building and variant calling. Availability and Implementation: LongISLND is implemented in Java and available at http://bioinform.github.io/longislnd Contact: hugo.lam@roche.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27667791
Phonotactic Probability of Brand Names: I'd buy that!
Vitevitch, Michael S.; Donoso, Alexander J.
2011-01-01
Psycholinguistic research shows that word-characteristics influence the speed and accuracy of various language-related processes. Analogous characteristics of brand names influence the retrieval of product information and the perception of risks associated with that product. In the present experiment we examined how phonotactic probability—the frequency with which phonological segments and sequences of segments appear in a word—might influence consumer behavior. Participants rated brand names that varied in phonotactic probability on the likelihood that they would buy the product. Participants indicated that they were more likely to purchase a product if the brand name was comprised of common segments and sequences of segments rather than less common segments and sequences of segments. This result suggests that word-characteristics may influence higher-level cognitive processes, in addition to language-related processes. Furthermore, the benefits of using objective measures of word characteristics in the design of brand names are discussed. PMID:21870135
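A toy sketch of the measure itself: positional segment probability can be estimated as the average, over positions, of how often each segment occurs at that position in a reference lexicon. The mini-lexicon below is invented; published norms are computed from large phonologically transcribed corpora.

```python
from collections import Counter, defaultdict

# Hypothetical mini-lexicon of phonologically transcribed words.
LEXICON = ["kat", "kap", "map", "mat", "tap"]

def positional_probs(lexicon):
    """Estimate P(segment at position i) from a lexicon."""
    pos_counts = defaultdict(Counter)
    pos_totals = Counter()
    for word in lexicon:
        for i, seg in enumerate(word):
            pos_counts[i][seg] += 1
            pos_totals[i] += 1
    return {i: {s: c / pos_totals[i] for s, c in cnt.items()}
            for i, cnt in pos_counts.items()}

def phonotactic_probability(word, probs):
    """Mean positional segment probability of a candidate (brand) name."""
    return sum(probs.get(i, {}).get(seg, 0.0) for i, seg in enumerate(word)) / len(word)

probs = positional_probs(LEXICON)
print(phonotactic_probability("kat", probs))  # common segments -> higher score
print(phonotactic_probability("zix", probs))  # rare segments -> lower score
```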
2013-04-01
Forces can be computed at specific angular positions, and geometrical parameters can be evaluated. Much higher resolution models are required, along... composition engines (C#, C++, Python, Java). Desert operates on the CyPhy model, converting from a design space alternative structure to a set of design... consists of scripts to execute Dymola, post-processing of results to create metrics, and general management of the job sequence. An earlier version created
RNA design using simulated SHAPE data.
Lotfi, Mohadeseh; Zare-Mirakabad, Fatemeh; Montaseri, Soheila
2018-05-03
It has long been established that in addition to being involved in protein translation, RNA plays essential roles in numerous other cellular processes, including gene regulation and DNA replication. Such roles are known to be dictated by higher-order structures of RNA molecules. It is therefore of prime importance to find an RNA sequence that can fold to acquire a particular function that is desirable for use in pharmaceuticals and basic research. The challenge of finding an RNA sequence for a given structure is known as the RNA design problem. Although there are several algorithms to solve this problem, they mainly consider hard constraints, such as minimum free energy, to evaluate the predicted sequences. Recently, SHAPE data has emerged as a new soft constraint for RNA secondary structure prediction. To take advantage of this new experimental constraint, we report here a new method for accurate design of RNA sequences based on their secondary structures using SHAPE data as pseudo-free energy. We then compare our algorithm with four others: INFO-RNA, ERD, MODENA and RNAifold 2.0. Our algorithm precisely predicts 26 out of 29 new sequences for the structures extracted from the Rfam dataset, while the other four algorithms predict no more than 22 out of 29. The proposed algorithm is comparable to the above algorithms on RNA-SSD datasets, where they can predict up to 33 appropriate sequences for RNA secondary structures out of 34.
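The abstract above incorporates SHAPE reactivities as pseudo-free energies during structure evaluation. One widely used conversion (not necessarily the exact scheme of this paper) is the Deigan-style linear-log model, sketched below; the slope and intercept are commonly cited default values and are stated here as assumptions.

```python
import math

def shape_pseudo_energy(reactivities, m=1.8, b=-0.6):
    """Convert SHAPE reactivities into per-nucleotide pseudo-free-energy terms
    (kcal/mol) using the Deigan-style model dG(i) = m * ln(r_i + 1) + b.

    m and b are typical default values, given here as assumptions; the paper's
    own parameterization may differ. Entries with no data contribute nothing.
    """
    penalties = []
    for r in reactivities:
        if r is None or r < 0:
            penalties.append(0.0)   # missing reactivity: no pseudo-energy term
        else:
            penalties.append(m * math.log(r + 1.0) + b)
    return penalties

# Example reactivities for a short stretch of nucleotides (invented values).
print([round(x, 3) for x in shape_pseudo_energy([0.05, 1.8, None, 0.6])])
```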
Kalendar, Ruslan; Tselykh, Timofey V; Khassenov, Bekbolat; Ramanculov, Erlan M
2017-01-01
This chapter introduces the FastPCR software as an integrated tool environment for PCR primer and probe design, which predicts properties of oligonucleotides based on experimental studies of PCR efficiency. The software provides comprehensive facilities for designing primers for most PCR applications and their combinations. These include standard PCR as well as multiplex, long-distance, inverse, real-time, group-specific, unique and overlap-extension PCR for multi-fragment assembly cloning, and loop-mediated isothermal amplification (LAMP). It also contains a built-in program to design oligonucleotide sets both for long sequence assembly by ligase chain reaction and for the design of amplicons that tile across a region (or regions) of interest. The software calculates the melting temperature for standard and degenerate oligonucleotides, including locked nucleic acid (LNA) and other modifications. It also provides analyses for a set of primers, with prediction of oligonucleotide properties, dimer and G/C-quadruplex detection and linguistic complexity, as well as a primer dilution and resuspension calculator. The program includes various bioinformatics tools for the analysis of sequences, covering GC or AT skew, CG% and GA% content, and the purine-pyrimidine skew. It also analyzes linguistic sequence complexity, generates random DNA sequences and performs restriction endonuclease analysis. The program allows the user to find or create restriction enzyme recognition sites for coding sequences and supports the clustering of sequences. It performs efficient and complete detection of various repeat types with a visual display. The FastPCR software supports batch processing of sequence files, which is essential for automation. The program is available for download at http://primerdigital.com/fastpcr.html, and its online version is located at http://primerdigital.com/tools/pcr.html.
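FastPCR's melting-temperature calculations rely on full thermodynamic models; as a rough, hedged illustration of what a Tm estimate involves, the sketch below implements two textbook approximations only (the Wallace rule for short primers and a GC-content formula for longer ones), not FastPCR's own nearest-neighbour method.

```python
def tm_wallace(primer):
    """Wallace rule: Tm = 2(A+T) + 4(G+C); reasonable only for primers under ~14 nt."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer):
    """GC-content approximation: Tm = 64.9 + 41*(G+C-16.4)/N, for longer oligos."""
    p = primer.upper()
    return 64.9 + 41.0 * (p.count("G") + p.count("C") - 16.4) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"   # an arbitrary example primer
print(round(tm_wallace(primer), 1), round(tm_gc(primer), 1))
```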
Energy Systems Integration Newsletter | Energy Systems Integration Facility
simulated sequences based on a model network. The competitive procurement process provided comparative, procurement help, design reviews, and now construction support. Miramar project support is part of integrated
T-Check in Technologies for Interoperability: Business Process Management in a Web Services Context
2008-09-01
UML Sequence Diagram), Figure 3 (BPMN Diagram of the Order Processing Business Process), Figure 4 (T-Check Process for Technology Evaluation), Figure 5 (Notional System Architecture), Figure 6 (Flow Chart of the Order Processing Business Process), Figure 7 (Order Processing Activities)...features. Figure 3 (created with Intalio BPMS Designer [Intalio 2008]) shows a BPMN view of the Order Processing business process that is used in the
Capturing the genetic makeup of the active microbiome in situ
Singer, Esther; Wagner, Michael; Woyke, Tanja
2017-06-02
More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions.
BarraCUDA - a fast short read sequence aligner using graphics processing units
2012-01-01
Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software package based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497
UniPrime2: a web service providing easier Universal Primer design.
Boutros, Robin; Stokes, Nicola; Bekaert, Michaël; Teeling, Emma C
2009-07-01
The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3), which reduces the laborious process of primer design, by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, only requiring a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licenced under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.
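A central step in the UniPrime2 pipeline is locating conserved regions within the alignment of homologues before candidate primers are generated. The sketch below scores alignment columns by identity and reports conserved windows; the window length, identity threshold, and toy alignment are illustrative assumptions rather than UniPrime2's actual parameters.

```python
from collections import Counter

def conserved_windows(alignment, window=20, min_identity=0.9):
    """Report (start, end) windows whose average column identity exceeds a threshold.

    `alignment` is a list of equal-length, gapped sequences. Parameters are
    illustrative defaults, not those of UniPrime2.
    """
    length = len(alignment[0])
    col_identity = []
    for i in range(length):
        # Fraction of all sequences sharing the most common (non-gap) residue.
        column = [seq[i] for seq in alignment if seq[i] != "-"]
        if not column:
            col_identity.append(0.0)
            continue
        most_common = Counter(column).most_common(1)[0][1]
        col_identity.append(most_common / len(alignment))
    windows = []
    for start in range(0, length - window + 1):
        score = sum(col_identity[start:start + window]) / window
        if score >= min_identity:
            windows.append((start, start + window))
    return windows

alignment = [
    "ATGCCGTACGT-TTGACGTACGATCC",
    "ATGCCGTACGTATTGACGTACGATCC",
    "ATGCCGTACGTATTGACGAACGATCC",
]
print(conserved_windows(alignment, window=10, min_identity=0.95))
```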
NASA Technical Reports Server (NTRS)
1981-01-01
Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.
Draw-a-Scientist/Mystery Box Redux
ERIC Educational Resources Information Center
Cavallo, Ann
2007-01-01
It is important that students have the opportunity to experience the nature and processes of science for themselves. The sequence of activities presented in this paper--Draw-a-Scientist and the Mystery Box Redux--were designed to help students better understand the nature of science (NOS) and engage them in the process of scientific inquiry. These…
ERIC Educational Resources Information Center
Lai, Polly K.; Portolese, Alisha; Jacobson, Michael J.
2017-01-01
This paper presents a study that applied both "productive failure" (PF) and "authentic learning" instructional approaches in online learning activities for early-career process engineers' professional development. This study compares participants learning with either a PF (low-to-high [LH]) or a more traditional (high-to-low)…
Towards automatic planning for manufacturing generative processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALTON,TERRI L.
2000-05-24
Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result of the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or omitted from the original design. As a result, process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.
Modeling Structure-Function Relationships in Synthetic DNA Sequences using Attribute Grammars
Cai, Yizhi; Lux, Matthew W.; Adam, Laura; Peccoud, Jean
2009-01-01
Recognizing that certain biological functions can be associated with specific DNA sequences has led various fields of biology to adopt the notion of the genetic part. This concept provides a finer level of granularity than the traditional notion of the gene. However, a method of formally relating how a set of parts relates to a function has not yet emerged. Synthetic biology both demands such a formalism and provides an ideal setting for testing hypotheses about relationships between DNA sequences and phenotypes beyond the gene-centric methods used in genetics. Attribute grammars are used in computer science to translate the text of a program source code into the computational operations it represents. By associating attributes with parts, modifying the value of these attributes using rules that describe the structure of DNA sequences, and using a multi-pass compilation process, it is possible to translate DNA sequences into molecular interaction network models. These capabilities are illustrated by simple example grammars expressing how gene expression rates are dependent upon single or multiple parts. The translation process is validated by systematically generating, translating, and simulating the phenotype of all the sequences in the design space generated by a small library of genetic parts. Attribute grammars represent a flexible framework connecting parts with models of biological function. They will be instrumental for building mathematical models of libraries of genetic constructs synthesized to characterize the function of genetic parts. This formalism is also expected to provide a solid foundation for the development of computer assisted design applications for synthetic biology. PMID:19816554
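As a toy illustration of the attribute-grammar idea, attaching attributes to genetic parts and synthesizing higher-level attributes with rules over the part sequence, the hedged sketch below derives an expression-rate attribute for a promoter/RBS/CDS cassette. The part names, attribute values, and the multiplicative rule are invented for demonstration and are not the grammars used in the paper.

```python
# Each part carries attributes; a "rule" synthesizes attributes of a composite
# construct from its children, in the spirit of an attribute grammar.

parts = {
    "pTac":  {"type": "promoter", "strength": 0.8},   # hypothetical values
    "rbs34": {"type": "rbs",      "efficiency": 0.6},
    "gfp":   {"type": "cds",      "protein": "GFP"},
}

def synthesize_cassette(part_names):
    """Combine promoter strength and RBS efficiency into an expression-rate
    attribute for the downstream CDS (an illustrative rule only)."""
    attrs = [parts[name] for name in part_names]
    promoter = next(a for a in attrs if a["type"] == "promoter")
    rbs = next(a for a in attrs if a["type"] == "rbs")
    cds = next(a for a in attrs if a["type"] == "cds")
    return {
        "protein": cds["protein"],
        "expression_rate": promoter["strength"] * rbs["efficiency"],
    }

print(synthesize_cassette(["pTac", "rbs34", "gfp"]))
```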
DNA polymerase preference determines PCR priming efficiency.
Pan, Wenjing; Byrne-Steele, Miranda; Wang, Chunlin; Lu, Stanley; Clemmons, Scott; Zahorchak, Robert J; Han, Jian
2014-01-30
Polymerase chain reaction (PCR) is one of the most important developments in modern biotechnology. However, PCR is known to introduce biases, especially during multiplex reactions. Recent studies have implicated the DNA polymerase as the primary source of bias, particularly initiation of polymerization on the template strand. In our study, amplification from a synthetic library containing a 12 nucleotide random portion was used to provide an in-depth characterization of DNA polymerase priming bias. The synthetic library was amplified with three commercially available DNA polymerases using an anchored primer with a random 3' hexamer end. After normalization, the next generation sequencing (NGS) results of the amplified libraries were directly compared to the unamplified synthetic library. Here, high throughput sequencing was used to systematically demonstrate and characterize DNA polymerase priming bias. We demonstrate that certain sequence motifs are preferred over others as primers where the six nucleotide sequences at the 3' end of the primer, as well as the sequences four base pairs downstream of the priming site, may influence priming efficiencies. DNA polymerases in the same family from two different commercial vendors prefer similar motifs, while another commercially available enzyme from a different DNA polymerase family prefers different motifs. Furthermore, the preferred priming motifs are GC-rich. The DNA polymerase preference for certain sequence motifs was verified by amplification from single-primer templates. We incorporated the observed DNA polymerase preference into a primer-design program that guides the placement of the primer to an optimal location on the template. DNA polymerase priming bias was characterized using a synthetic library amplification system and NGS. The characterization of DNA polymerase priming bias was then utilized to guide the primer-design process and demonstrate varying amplification efficiencies among three commercially available DNA polymerases. The results suggest that the interaction of the DNA polymerase with the primer:template junction during the initiation of DNA polymerization is very important in terms of overall amplification bias and has broader implications for both the primer design process and multiplex PCR.
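Because the study finds that the 3'-terminal hexamer of a primer (GC-rich motifs in particular) strongly influences priming efficiency, primer placement can be guided by scoring candidate sites on the template. The sketch below ranks candidate priming positions by the GC fraction of the 3' hexamer; this simple score is a stand-in assumption, not the preference model measured by the authors.

```python
def score_priming_sites(template, primer_len=20):
    """Rank candidate priming positions by the GC fraction of the 3'-terminal hexamer.

    The GC-based score is an illustrative proxy for the polymerase-preference
    weights characterized in the study.
    """
    scores = []
    for start in range(len(template) - primer_len + 1):
        primer = template[start:start + primer_len]
        hexamer = primer[-6:]
        gc = sum(hexamer.count(b) for b in "GC") / 6.0
        scores.append((gc, start, hexamer))
    return sorted(scores, reverse=True)

template = "ATGGCGCATTTACGGGCCATATTTAAGGCGCGCCTTAACGT"
for gc, start, hexamer in score_priming_sites(template)[:3]:
    print(f"pos {start:2d}  3'-hexamer {hexamer}  GC={gc:.2f}")
```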
NASA Technical Reports Server (NTRS)
Koskela, P. E.; Bollman, W. E.; Freeman, J. E.; Helton, M. R.; Reichert, R. J.; Travers, E. S.; Zawacki, S. J.
1973-01-01
The activities of the following members of the Navigation Team are recorded: the Science Sequence Design Group, responsible for preparing the final science sequence designs; the Advanced Sequence Planning Group, responsible for sequence planning; and the Science Recommendation Team (SRT) representatives, responsible for conducting the necessary sequence design interfaces with the teams during the mission. The interface task included science support in both advance planning and daily operations. Science sequences designed during the mission are also discussed.
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. In this paper, we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sounds, and ample and intuitive interaction with developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototype and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
Quiet aircraft design and operational characteristics
NASA Technical Reports Server (NTRS)
Hodge, Charles G.
1991-01-01
The application of aircraft noise technology to the design and operation of aircraft is discussed. Areas of discussion include the setting of target airplane noise levels, operational considerations and their effect on noise, and the sequencing and timing of the design and development process. Primary emphasis is placed on commercial transport aircraft of the type operated by major airlines. Additionally, noise control engineering of other types of aircraft is briefly discussed.
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding from the user the raw data generation and conversion steps. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework. This allows developers to integrate additional third-party programs to MolIDE. http://dunbrack.fccc.edu/molide/molide.php rl_dunbrack@fccc.edu.
NASA Astrophysics Data System (ADS)
Novianti, T.; Sadikin, M.; Widia, S.; Juniantito, V.; Arida, E. A.
2018-03-01
Characterizing as-yet-unidentified specific genes is essential for analyzing their availability in biological processes. Identifying the unidentified specific DNA of the HIF 1α gene is important for analyzing its contribution to the tissue regeneration process in the lizard tail (Hemidactylus platyurus). Bioinformatics and PCR techniques are relatively easy methods for identifying an unidentified gene. The most widely used method is BLAST (Basic Local Alignment Search Tool), which aligns sequences from other organisms. BLAST is online software, available at https://blast.ncbi.nlm.nih.gov/Blast.cgi, that is capable of retrieving similar sequences from the closest to the most distant kinship. Gecko japonicus is the species with the closest kinship to H. platyurus. The HIF 1α gene sequence of G. japonicus was compared with those of other species using the multiple alignment methods of the Mega7 software, and conserved base regions were identified using the Clustal method. Primers for the HIF 1α gene were designed with the Primer3 software. The HIF 1α gene of the lizard (H. platyurus) was successfully amplified in a real-time PCR machine using the primers designed from Gecko japonicus; thus, the unidentified HIF 1α gene of the lizard was successfully identified with the multiple alignment method. The study analyzed tail growth on days 1, 3, 5, 7, 10, 13 and 17 after autotomy. Amplification of the HIF 1α gene was described by the CT value from the real-time PCR machine, and HIF 1α gene expression was quantified with the Livak formula. A chi-square test gave a significance value of 0.000, indicating that HIF 1α gene expression differed among the growth-day treatments.
Image data-processing system for solar astronomy
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.
1977-01-01
The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.
Patient perspectives on whole-genome sequencing for undiagnosed diseases.
Boeldt, Debra L; Cheung, Cynthia; Ariniello, Lauren; Darst, Burcu F; Topol, Sarah; Schork, Nicholas J; Philis-Tsimikas, Athena; Torkamani, Ali; Fortmann, Addie L; Bloss, Cinnamon S
2017-01-01
This study assessed perspectives on whole-genome sequencing (WGS) for rare disease diagnosis and the process of receiving genetic results. Semistructured interviews were conducted with adult patients and parents of minor patients affected by idiopathic diseases (n = 10 cases). Three main themes were identified through qualitative data analysis and interpretation: perceived benefits of WGS; perceived drawbacks of WGS; and perceptions of the return of results from WGS. Findings suggest that patients and their families have important perspectives on the use of WGS in diagnostic odyssey cases. These perspectives could inform clinical sequencing research study designs as well as the appropriate deployment of patient and family support services in the context of clinical genome sequencing.
LongISLND: in silico sequencing of lengthy and noisy datatypes.
Lau, Bayo; Mohiyuddin, Marghoob; Mu, John C; Fang, Li Tai; Bani Asadi, Narges; Dallett, Carolina; Lam, Hugo Y K
2016-12-15
LongISLND is a software package designed to simulate sequencing data according to the characteristics of third generation, single-molecule sequencing technologies. The general software architecture is easily extendable, as demonstrated by the emulation of Pacific Biosciences (PacBio) multi-pass sequencing with P5 and P6 chemistries, producing data in FASTQ, H5, and the latest PacBio BAM format. We demonstrate its utility by downstream processing with consensus building and variant calling. LongISLND is implemented in Java and available at http://bioinform.github.io/longislnd. Contact: hugo.lam@roche.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
NASA Astrophysics Data System (ADS)
Nomaguchi, Yutaka; Fujita, Kikuo
This paper proposes a design support framework, named DRIFT (Design Rationale Integration Framework of Three layers), which dynamically captures and manages hypotheses and verification in the design process. The core of DRIFT is a three-layered design process model of action, model operation and argumentation. This model integrates various design support tools and captures the design operations performed on them. The action level captures the sequence of design operations. The model operation level captures the transition of design states, recording design snapshots across design tools. The argumentation level captures the process of setting problems and alternatives. The linkage of the three levels makes it possible to automatically and efficiently capture and manage iterative hypothesis-and-verification processes through design operations across design tools. In DRIFT, such linkage is extracted through templates of design operations, which are derived from patterns embedded in design tools such as Design-For-X (DFX) approaches, and the design tools are integrated through an ontology-based representation of design concepts. An argumentation model, gIBIS (graphical Issue-Based Information System), is used for representing dependencies among problems and alternatives. A mechanism of TMS (Truth Maintenance System) is used for managing multiple hypothetical design stages. The paper also demonstrates a prototype implementation of DRIFT and its application to a simple design problem, and concludes with a discussion of some future issues.
EuPaGDT: a web tool tailored to design CRISPR guide RNAs for eukaryotic pathogens.
Peng, Duo; Tarleton, Rick
2015-10-01
Recent development of CRISPR-Cas9 genome editing has enabled highly efficient and versatile manipulation of a variety of organisms and adaptation of the CRISPR-Cas9 system to eukaryotic pathogens has opened new avenues for studying these otherwise hard to manipulate organisms. Here we describe a webtool, Eukaryotic Pathogen gRNA Design Tool (EuPaGDT; available at http://grna.ctegd.uga.edu), which identifies guide RNA (gRNA) in input gene(s) to guide users in arriving at well-informed and appropriate gRNA design for many eukaryotic pathogens. Flexibility in gRNA design, accommodating unique eukaryotic pathogen (gene and genome) attributes and high-throughput gRNA design are the main features that distinguish EuPaGDT from other gRNA design tools. In addition to employing an array of known principles to score and rank gRNAs, EuPaGDT implements an effective on-target search algorithm to identify gRNA targeting multi-gene families, which are highly represented in these pathogens and play important roles in host-pathogen interactions. EuPaGDT also identifies and scores microhomology sequences flanking each gRNA targeted cut-site; these sites are often essential for the microhomology-mediated end joining process used for double-stranded break repair in these organisms. EuPaGDT also assists users in designing single-stranded oligonucleotides for homology directed repair. In batch processing mode, EuPaGDT is able to process genome-scale sequences, enabling preparation of gRNA libraries for large-scale screening projects.
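A minimal version of the first step EuPaGDT performs, enumerating candidate gRNAs adjacent to a Cas9 NGG PAM, is sketched below. It scans the forward strand only; real tools such as EuPaGDT also handle the reverse strand, score on- and off-targets genome-wide, and evaluate flanking microhomology, none of which is attempted here.

```python
import re

def find_gRNAs(seq, spacer_len=20):
    """Return (position, protospacer, PAM) for every NGG PAM on the forward strand."""
    seq = seq.upper()
    guides = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):   # lookahead catches overlapping PAMs
        pam_start = m.start(1)
        if pam_start >= spacer_len:
            protospacer = seq[pam_start - spacer_len:pam_start]
            guides.append((pam_start - spacer_len, protospacer, m.group(1)))
    return guides

gene = "ATGCTGACCGGTTACGATCGATCGGAGCTAGCTAGGCTAGCTACGGATCGATCGTAGCTAGG"
for pos, protospacer, pam in find_gRNAs(gene):
    print(pos, protospacer, pam)
```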
Cloud-based adaptive exon prediction for DNA analysis
Putluri, Srinivasareddy; Fathima, Shaik Yasmeen
2018-01-01
Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of such sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by the use of cloud services. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques. Adaptive signal processing techniques have been found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using variable normalised least mean square and its maximum normalised variants to reduce computational complexity. Finally, performance evaluation of the various AEPs is carried out based on measures such as sensitivity, specificity and precision, using standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database. PMID:29515813
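The three-base periodicity that underpins these exon predictors can be illustrated with the classical spectral check: the power of the DFT of a base-indicator sequence at one third of the window length. The numpy sketch below shows that principle only; the paper's adaptive normalised-LMS predictors are not reproduced here.

```python
import numpy as np

def period3_power(seq, window=120, step=12):
    """Sliding-window DFT power at the one-third frequency, a classic
    indicator of protein-coding (exonic) regions."""
    seq = seq.upper()
    powers = []
    for start in range(0, len(seq) - window + 1, step):
        win = seq[start:start + window]
        total = 0.0
        for base in "ACGT":
            indicator = np.array([1.0 if b == base else 0.0 for b in win])
            spectrum = np.fft.fft(indicator)
            total += np.abs(spectrum[window // 3]) ** 2   # k = N/3 component
        powers.append((start, total))
    return powers

# Toy sequence: random background with an artificial 3-periodic stretch inserted.
rng = np.random.default_rng(0)
background = "".join(rng.choice(list("ACGT"), size=240))
coding_like = "ATG" * 80
profile = period3_power(background + coding_like + background)
start, power = max(profile, key=lambda item: item[1])
print(f"strongest period-3 signal at position {start} (power {power:.1f})")
```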
Roots, Trees, and the Forest: An Effective Schools Development Sequence.
ERIC Educational Resources Information Center
Meyers, H. W.; Carlson, Robert
Findings from a study that examined the implementation of an effective schools development process are presented in this study. The study was designed to track both implementation process objectives drawn from seven correlates of instructionally effective schools and student outcomes from 1988-90 in a small-city school district in a rural state.…
ERIC Educational Resources Information Center
Smith, Toni M.; Hjalmarson, Margret A.
2013-01-01
The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…
Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link
Rush, Bobby G.; Riley, Robert
2015-09-29
Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.
Davidsson, Marcus; Diaz-Fernandez, Paula; Schwich, Oliver D.; Torroba, Marcos; Wang, Gang; Björklund, Tomas
2016-01-01
Detailed characterization and mapping of oligonucleotide function in vivo is generally a very time-consuming effort that only allows hypothesis-driven subsampling of the full sequence to be analysed. Recent advances in deep sequencing, together with highly efficient parallel oligonucleotide synthesis and cloning techniques, have, however, opened up entirely new ways to map genetic function in vivo. Here we present a novel, optimized protocol for the generation of universally applicable, barcode-labelled plasmid libraries. The libraries are designed to enable the production of viral vector preparations for assessing coding or non-coding RNA function in vivo. When generating high-diversity libraries, it is a challenge to achieve efficient cloning, unambiguous barcoding and detailed characterization using low-cost sequencing technologies. With the presented protocol, a diversity of more than 3 million uniquely barcoded adeno-associated viral (AAV) plasmids can be achieved in a single reaction through a process achievable in any molecular biology laboratory. This approach opens up a multitude of in vivo assessments, from the evaluation of enhancer and promoter regions to the optimization of genome editing. The generated plasmid libraries are also useful for the validation of sequence-clustering algorithms, and we here validate the newly presented message-passing clustering process named Starcode. PMID:27874090
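One requirement stressed above is unambiguous barcoding, i.e. barcodes that remain distinguishable despite sequencing errors. The sketch below rejection-samples random barcodes subject to a minimum pairwise Hamming distance; the barcode length, distance threshold, and sampling strategy are illustrative assumptions, not the published design.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def make_barcodes(n, length=14, min_dist=3, seed=1):
    """Rejection-sample random barcodes that keep a minimum pairwise Hamming distance."""
    rng = random.Random(seed)
    barcodes = []
    while len(barcodes) < n:
        candidate = "".join(rng.choice("ACGT") for _ in range(length))
        if all(hamming(candidate, bc) >= min_dist for bc in barcodes):
            barcodes.append(candidate)
    return barcodes

print(make_barcodes(5))
```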
Generation of control sequences for a pilot-disassembly system
NASA Astrophysics Data System (ADS)
Seliger, Guenther; Kim, Hyung-Ju; Keil, Thomas
2002-02-01
Closing the product and material cycles has emerged as a paradigm for industry in the 21st century. Disassembly plays a key role in a life cycle economy since it enables the recovery of resources. A partly automated disassembly system should adapt to a large variety of products and different degrees of devaluation. In addition, the quantities of products to be disassembled can vary strongly. To cope with these demands, an approach to generating on-line disassembly control sequences is presented. In order to react to these demands, technological feasibility is considered within a procedure for the generation of disassembly control sequences. Procedures are designed to find available and technologically feasible disassembly processes. The control system is formed by modularised and parameterised control units at the cell level within the overall control architecture. In the first development stage, product and process analyses were carried out on the sample product, a washing machine, and a generalized disassembly process was defined. These processes were then structured into primary and secondary functions. In the second stage, disassembly control at the technological level was investigated; the factors were the availability of the disassembly tools and the technological feasibility of the disassembly processes within the disassembly system. Technically feasible alternative disassembly processes are determined based on the availability of the tools and the technological feasibility of the processes. The fourth phase was the concept for the generation of the disassembly control sequences. The approach is demonstrated in a prototypical disassembly system.
ERIC Educational Resources Information Center
Armstrong, Matt; Comitz, Richard L.; Biaglow, Andrew; Lachance, Russ; Sloop, Joseph
2008-01-01
A novel approach to the Chemical Engineering curriculum sequence of courses at West Point enabled our students to experience a much more realistic design process, which more closely replicated a real world scenario. Students conduct the synthesis in the organic chemistry lab, then conduct computer modeling of the reaction with ChemCad and…
RosettaAntibodyDesign (RAbD): A general framework for computational antibody design
Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R.
2018-01-01
A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228–256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody–antigen complexes, using two design strategies—optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the native amino acid types for residues that contact the antigen in the native structures was 72% in simulations performed in the presence of the antigen and 48% in simulations performed without the antigen, for an ARR of 1.5. For the non-contacting residues, the ARR was 1.08. This shows that the sequence profiles are able to maintain the amino acid types of these conserved, buried sites, while recovery of the exposed, contacting residues requires the presence of the antigen-antibody interface. We tested RAbD experimentally on both a lambda and kappa antibody–antigen complex, successfully improving their affinities 10 to 50 fold by replacing individual CDRs of the native antibody with new CDR lengths and clusters. PMID:29702641
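The two benchmark metrics have simple arithmetic definitions that are easy to restate in code: DRR divides the recovery frequency of native CDR lengths and clusters by their sampling frequency, and ARR divides the frequency of native features with the antigen present by the frequency without it. The first pair of numbers below is invented purely to show the computation; the second pair (72% versus 48%) is quoted from the abstract above.

```python
def risk_ratio(freq_observed, freq_baseline):
    """Generic ratio used for both the design risk ratio (DRR) and antigen risk ratio (ARR)."""
    return freq_observed / freq_baseline

# Hypothetical numbers, for illustration only (not from the paper's benchmark):
# the native cluster is recovered in 60% of designs but sampled only 20% of the time.
drr = risk_ratio(0.60, 0.20)   # -> 3.0, i.e. the native is picked 3x more often than sampled

# Native amino acid types recovered 72% of the time with antigen, 48% without (from the abstract).
arr = risk_ratio(0.72, 0.48)   # -> 1.5, the contacting-residue ARR quoted above
print(drr, arr)
```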
RosettaAntibodyDesign (RAbD): A general framework for computational antibody design.
Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Weitzner, Brian D; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R; Dunbrack, Roland L
2018-04-01
A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228-256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody-antigen complexes, using two design strategies-optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the native amino acid types for residues that contact the antigen in the native structures was 72% in simulations performed in the presence of the antigen and 48% in simulations performed without the antigen, for an ARR of 1.5. For the non-contacting residues, the ARR was 1.08. This shows that the sequence profiles are able to maintain the amino acid types of these conserved, buried sites, while recovery of the exposed, contacting residues requires the presence of the antigen-antibody interface. We tested RAbD experimentally on both a lambda and kappa antibody-antigen complex, successfully improving their affinities 10 to 50 fold by replacing individual CDRs of the native antibody with new CDR lengths and clusters.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
Apparatus for improved DNA sequencing
Douthart, R.J.; Crowell, S.L.
1996-05-07
This invention is a means for the rapid sequencing of DNA samples. More specifically, it consists of a new design direct blotting electrophoresis unit. The DNA sequence is deposited on a membrane attached to a rotating drum. Initial data compaction is facilitated by the use of a machined multi-channeled plate called a ribbon channel plate. Each channel is an isolated mini gel system much like a gel filled capillary. The system as a whole, however, is in a slab gel like format with the advantages of uniformity and easy reusability. The system can be used in different embodiments. The drum system is unique in that after deposition the drum rotates the deposited DNA into a large non-buffer open space where processing and detection can occur. The drum can also be removed in toto to special workstations for downstream processing, multiplexing and detection. 18 figs.
Apparatus for improved DNA sequencing
Douthart, Richard J.; Crowell, Shannon L.
1996-01-01
This invention is a means for the rapid sequencing of DNA samples. More specifically, it consists of a new design direct blotting electrophoresis unit. The DNA sequence is deposited on a membrane attached to a rotating drum. Initial data compaction is facilitated by the use of a machined multi-channeled plate called a ribbon channel plate. Each channel is an isolated mini gel system much like a gel filled capillary. The system as a whole, however, is in a slab gel like format with the advantages of uniformity and easy reusability. The system can be used in different embodiments. The drum system is unique in that after deposition the drum rotates the deposited DNA into a large non-buffer open space where processing and detection can occur. The drum can also be removed in toto to special workstations for downstream processing, multiplexing and detection.
Effects of dispense equipment sequence on process start-up defects
NASA Astrophysics Data System (ADS)
Brakensiek, Nick; Sevegney, Michael
2013-03-01
Photofluid dispense systems within coater/developer tools have been designed with the intent to minimize cost of ownership to the end user. Waste and defect minimization, dispense quality and repeatability, and ease of use are all desired characteristics. One notable change within commercially available systems is the sequence in which process fluid encounters dispense pump and filtration elements. Traditionally, systems adopted a pump-first sequence, where fluid is "pushed through" a point-of-use filter just prior to dispensing on the wafer. Recently, systems configured in a pump-last scheme have become available, where fluid is "pulled through" the filter, into the pump, and then is subsequently dispensed. The present work constitutes a comparative evaluation of the two equipment sequences with regard to the aforementioned characteristics that impact cost of ownership. Additionally, removal rating and surface chemistry (i.e., hydrophilicity) of the point-of-use filter are varied in order to evaluate their influence on system start-up and defects.
Use of simulated data sets to evaluate the fidelity of Metagenomicprocessing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri
2006-12-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (blast hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
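The construction of such benchmark data sets, sampling reads from isolate genomes and mixing them at chosen abundances, can be sketched in a few lines. The example below draws error-free fixed-length reads from toy genome strings; the read length, abundances, and absence of sequencing error are simplifying assumptions (the actual data sets combined real sequencing reads from 113 isolate genomes).

```python
import random

def simulate_metagenome(genomes, abundances, n_reads=1000, read_len=100, seed=0):
    """Sample error-free fixed-length reads from isolate genome sequences in
    proportion to the given abundances (a simplified stand-in for the paper's
    construction from real sequencing reads)."""
    rng = random.Random(seed)
    names = list(genomes)
    weights = [abundances[name] for name in names]
    reads = []
    for i in range(n_reads):
        name = rng.choices(names, weights=weights, k=1)[0]
        genome = genomes[name]
        start = rng.randrange(0, len(genome) - read_len + 1)
        reads.append((f"read_{i}|{name}", genome[start:start + read_len]))
    return reads

# Toy "genomes" standing in for the isolate genomes used in the study.
gen = random.Random(42)
genomes = {
    "orgA": "".join(gen.choice("ACGT") for _ in range(5000)),
    "orgB": "".join(gen.choice("ACGT") for _ in range(5000)),
}
for name, seq in simulate_metagenome(genomes, {"orgA": 0.8, "orgB": 0.2}, n_reads=5):
    print(name, seq[:40] + "...")
```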
Methods for processing high-throughput RNA sequencing data.
Ares, Manuel
2014-11-03
High-throughput sequencing (HTS) methods for analyzing RNA populations (RNA-Seq) are gaining rapid application to many experimental situations. The steps in an RNA-Seq experiment require thought and planning, especially because the expense in time and materials is currently higher and the protocols are far less routine than those used for other high-throughput methods, such as microarrays. As always, good experimental design will make analysis and interpretation easier. Having a clear biological question, an idea about the best way to do the experiment, and an understanding of the number of replicates needed will make the entire process more satisfying. Whether the goal is capturing transcriptome complexity from a tissue or identifying small fragments of RNA cross-linked to a protein of interest, conversion of the RNA to cDNA followed by direct sequencing using the latest methods is a developing practice, with new technical modifications and applications appearing every day. Even more rapid are the development and improvement of methods for analysis of the very large amounts of data that arrive at the end of an RNA-Seq experiment, making considerations regarding reproducibility, validation, visualization, and interpretation increasingly important. This introduction is designed to review and emphasize a pathway of analysis from experimental design through data presentation that is likely to be successful, with the recognition that better methods are right around the corner. © 2014 Cold Spring Harbor Laboratory Press.
Molecular Characterization of Wetland Soil Bacterial Community in Constructed Mesocosms
2006-06-01
promise. In order to better understand this process and test its legitimacy, a treatment wetland was constructed at Wright-Patterson AFB, Dayton, Ohio...fruition. Dr. Smith, without your patient instruction in the process of DNA extraction, PCR amplification, cloning, sequencing, and analysis those...then, wetlands have also been designed and constructed to treat process waters from industry (Kadlec and Knight, 1996) and are being used more and
A Tentative Application Of Morphological Filters To Time-Varying Images
NASA Astrophysics Data System (ADS)
Billard, D.; Poquillon, B.
1989-03-01
In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension then induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows specific morphological transforms to be derived for noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to processing dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
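A spatio-temporal morphological filter treats the image sequence as a 3-D volume over (t, y, x) and applies grey-scale morphology with a structuring element that extends in time. The scipy sketch below applies a grey opening with a 3x3x3 neighbourhood to a synthetic noisy sequence, suppressing isolated single-pixel events; the element shape is an illustrative choice, since the paper adjusts it to the temporal variations of the sequence and the sensor motion.

```python
import numpy as np
from scipy import ndimage

# Synthetic image sequence: 16 frames of 64x64 background noise with a bright
# one-pixel moving object; axis 0 is time.
rng = np.random.default_rng(0)
frames = rng.normal(loc=10.0, scale=1.0, size=(16, 64, 64))
for t in range(16):
    frames[t, 20 + t, 20 + t] += 30.0

# Grey opening with a spatio-temporal (t, y, x) structuring element: isolated
# events smaller than the element are suppressed.
opened = ndimage.grey_opening(frames, size=(3, 3, 3))
print(frames.max().round(1), opened.max().round(1))
```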
Zhang, Lin; Bai, Zhitong; Ban, Heng; Liu, Ling
2015-11-21
Recent experiments have discovered very different thermal conductivities between the spider silk and the silkworm silk. Decoding the molecular mechanisms underpinning the distinct thermal properties may guide the rational design of synthetic silk materials and other biomaterials for multifunctionality and tunable properties. However, such an understanding is lacking, mainly due to the complex structure and phonon physics associated with the silk materials. Here, using non-equilibrium molecular dynamics, we demonstrate that the amino acid sequence plays a key role in the thermal conduction process through β-sheets, essential building blocks of natural silks and a variety of other biomaterials. Three representative β-sheet types, i.e. poly-A, poly-(GA), and poly-G, are shown to have distinct structural features and phonon dynamics leading to different thermal conductivities. A fundamental understanding of the sequence effects may stimulate the design and engineering of polymers and biopolymers for desired thermal properties.
Spacecraft command verification: The AI solution
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.
1990-01-01
Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
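As a hedged illustration of the rule-based idea (not TRW's actual CCC rules), the sketch below encodes two invented timing constraints as predicate functions over a command sequence and reports which ones are violated.

```python
# Each rule is a (name, predicate) pair over the full command sequence; the
# constraints and command names are invented for illustration.

commands = [
    {"t": 0.0,  "cmd": "HEATER_ON"},
    {"t": 1.0,  "cmd": "CAMERA_POWER_ON"},
    {"t": 2.0,  "cmd": "CAMERA_EXPOSE"},     # too soon after power-on under the toy rule
    {"t": 50.0, "cmd": "HEATER_OFF"},
]

def camera_warmup_ok(seq, min_gap=5.0):
    """CAMERA_EXPOSE must come at least `min_gap` seconds after a CAMERA_POWER_ON."""
    power_on = [c["t"] for c in seq if c["cmd"] == "CAMERA_POWER_ON"]
    exposures = [c["t"] for c in seq if c["cmd"] == "CAMERA_EXPOSE"]
    return all(any(t_exp - t_on >= min_gap for t_on in power_on) for t_exp in exposures)

def heater_duty_ok(seq, max_on=100.0):
    """The heater must not stay on longer than `max_on` seconds."""
    on = [c["t"] for c in seq if c["cmd"] == "HEATER_ON"]
    off = [c["t"] for c in seq if c["cmd"] == "HEATER_OFF"]
    return all(t_off - t_on <= max_on for t_on, t_off in zip(on, off))

rules = [("camera warm-up", camera_warmup_ok), ("heater duty cycle", heater_duty_ok)]
for name, rule in rules:
    print(name, "OK" if rule(commands) else "VIOLATED")
```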
A novel surrogate-based approach for optimal design of electromagnetic-based circuits
NASA Astrophysics Data System (ADS)
Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.
2016-02-01
A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.
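The iterative boundary search used to obtain the normed distances can be illustrated with a simple bisection along a ray: starting from a feasible centre, bracket the boundary and bisect until the crossing point is located. The feasibility test (a unit disc standing in for the circuit's performance specifications) and the search direction below are toy assumptions, not the paper's EM-simulation-driven constraints.

```python
def boundary_distance(center, direction, feasible, t_max=10.0, tol=1e-6):
    """Bisection search for the distance from `center` to the feasibility boundary
    along `direction` (assumes `center` is feasible and the boundary is crossed once)."""
    point = lambda t: [c + t * d for c, d in zip(center, direction)]
    lo, hi = 0.0, t_max
    if feasible(point(hi)):            # boundary lies beyond the search range
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(point(mid)):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy feasible region: a unit disc in the design-parameter plane.
feasible = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
print(round(boundary_distance([0.2, 0.1], [1.0, 0.0], feasible), 4))   # ~0.795
```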
Predicting DNA hybridization kinetics from sequence
NASA Astrophysics Data System (ADS)
Zhang, Jinny X.; Fang, John Z.; Duan, Wei; Wu, Lucia R.; Zhang, Angela W.; Dalchau, Neil; Yordanov, Boyan; Petersen, Rasmus; Phillips, Andrew; Zhang, David Yu
2018-01-01
Hybridization is a key molecular process in biology and biotechnology, but so far there is no predictive model for accurately determining hybridization rate constants based on sequence information. Here, we report a weighted neighbour voting (WNV) prediction algorithm, in which the hybridization rate constant of an unknown sequence is predicted based on similarity to reactions with known rate constants. To construct this algorithm we first performed 210 fluorescence kinetics experiments to observe the hybridization kinetics of 100 different DNA target and probe pairs (36 nt sub-sequences of the CYCS and VEGF genes) at temperatures ranging from 28 to 55 °C. Automated feature selection and weighting optimization resulted in a final six-feature WNV model, which can predict hybridization rate constants of new sequences to within a factor of 3 with ∼91% accuracy, based on leave-one-out cross-validation. Accurate prediction of hybridization kinetics allows the design of efficient probe sequences for genomics research.
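The core of the weighted-neighbour-voting idea, predicting the rate constant of a new sequence from a similarity-weighted average over reactions with known kinetics, can be illustrated with a short sketch. The feature vectors, distance-based weighting, and toy rate constants below are hypothetical stand-ins; the paper's six optimized features and weights are not given in the abstract.

```python
import numpy as np

def wnv_predict(query_features, known_features, known_log_k, power=2.0):
    """Weighted neighbour voting sketch: predict the log10 hybridization rate
    constant of a query as a similarity-weighted average over reactions with
    known rate constants (closer neighbours vote with more weight)."""
    query = np.asarray(query_features, dtype=float)
    known = np.asarray(known_features, dtype=float)
    dists = np.linalg.norm(known - query, axis=1)
    weights = 1.0 / (dists + 1e-9) ** power        # hypothetical weighting scheme
    return float(np.dot(weights, known_log_k) / weights.sum())

# Toy usage: three reactions with known kinetics, one query sequence.
known_X = [[0.20, 0.50, 0.10], [0.30, 0.40, 0.20], [0.90, 0.10, 0.80]]
known_log_k = [5.8, 6.1, 4.9]                      # log10 of k, illustrative values
print(wnv_predict([0.25, 0.45, 0.15], known_X, known_log_k))
```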
PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities.
Troshin, Peter V; Postis, Vincent Lg; Ashworth, Denise; Baldwin, Stephen A; McPherson, Michael J; Barton, Geoffrey J
2011-03-07
Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/.
Optimized Smith Waterman processor design for breast cancer early diagnosis
NASA Astrophysics Data System (ADS)
Nurdin, D. S.; Isa, M. N.; Ismail, R. C.; Ahmad, M. I.
2017-09-01
This paper presents an optimized design of the Processing Element (PE) of a Systolic Array (SA) which implements the affine gap penalty Smith Waterman (SW) algorithm on the Xilinx Virtex-6 XC6VLX75T Field Programmable Gate Array (FPGA) for Deoxyribonucleic Acid (DNA) sequence alignment. The PE optimization aims to reduce PE logic resources to increase the number of PEs in the FPGA for a higher degree of parallelism during alignment matrix computations. This is useful for aligning long DNA-based disease sequences such as Breast Cancer (BC) for early diagnosis. The optimized PE architecture has the smallest PE area, with 15 slices per PE and 776 PEs implemented in the Virtex-6 FPGA.
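For reference, the recurrences that the PEs implement are those of the affine-gap (Gotoh) variant of Smith-Waterman; each cell depends only on its left, upper, and diagonal neighbours, which is what lets a systolic array compute one anti-diagonal of the matrix per clock cycle. The sketch below is a plain software version with illustrative scoring parameters, not the FPGA design itself.

```python
def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=3, gap_extend=1):
    """Local alignment score with affine gap penalties (Gotoh recurrences).
    Scoring parameters here are illustrative, not those of the FPGA design."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best alignment ending at (i, j)
    E = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignments ending in a gap in a
    F = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignments ending in a gap in b
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(H[i][j - 1] - gap_open, E[i][j - 1] - gap_extend)
            F[i][j] = max(H[i - 1][j] - gap_open, F[i - 1][j] - gap_extend)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

print(smith_waterman_affine("ACACACTA", "AGCACACA"))  # best local score for toy inputs
```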
An application of computer aided requirements analysis to a real time deep space system
NASA Technical Reports Server (NTRS)
Farny, A. M.; Morris, R. V.; Hartsough, C.; Callender, E. D.; Teichroew, D.; Chikofsky, E.
1981-01-01
The entire procedure of incorporating the requirements and goals of a space flight project into integrated, time-ordered sequences of spacecraft commands is called the uplink process. The Uplink Process Control Task (UPCT) was created to examine the uplink process and determine ways to improve it. The Problem Statement Language/Problem Statement Analyzer (PSL/PSA), designed to assist the designer/analyst/engineer in the preparation of specifications of an information system, is used as a supporting tool to aid in the analysis. Attention is given to a definition of the uplink process, the definition of PSL/PSA, the construction of a PSA database, the value of analysis to the study of the uplink process, and the PSL/PSA lessons learned.
New Nomenclatures for Heat Treatments of Additively Manufactured Titanium Alloys
NASA Astrophysics Data System (ADS)
Baker, Andrew H.; Collins, Peter C.; Williams, James C.
2017-07-01
The heat-treatment designations and microstructure nomenclatures for many structural metallic alloys were established for traditional metals processing, such as casting, hot rolling or forging. These terms do not necessarily apply for additively manufactured (i.e., three-dimensionally printed or "3D printed") metallic structures. The heat-treatment terminology for titanium alloys generally implies the heat-treatment temperatures and their sequence relative to a thermomechanical processing step (e.g., forging, rolling). These designations include: β-processing, α + β-processing, β-annealing, duplex annealing and mill annealing. Owing to the absence of a thermomechanical processing step, these traditional designations can pose a problem when titanium alloys are first produced via additive manufacturing, and then heat-treated. This communication proposes new nomenclatures for heat treatments of additively manufactured titanium alloys, and uses the distinct microstructural features to provide a correlation between traditional nomenclature and the proposed nomenclature.
Study of mould design and forming process on advanced polymer-matrix composite complex structure
NASA Astrophysics Data System (ADS)
Li, S. J.; Zhan, L. H.; Bai, H. M.; Chen, X. P.; Zhou, Y. Q.
2015-07-01
Advanced carbon fibre-reinforced polymer-matrix composites are widely applied in the aviation manufacturing field due to their outstanding performance. In this paper, the mould design and forming process of a complex composite structure were discussed in detail using the hat stiffened structure as an example. The key issues of the mould design were analyzed, and the corresponding solutions were also presented. The crucial control points of the forming process, such as the determination of materials and stacking sequence and the temperature and pressure route of the co-curing process, were introduced. In order to guarantee the forming quality of the composite hat stiffened structure, a mathematical model for the aperture of the rubber mandrel was introduced. The study presented in this paper may provide useful references for the design and manufacture of important complex composite structures.
Genetic circuit design automation.
Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A
2016-04-01
Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.
Larson, Amy; Fair, Benjamin Jung; Pleiss, Jeffrey A
2016-06-01
Pre-mRNA splicing is an essential component of eukaryotic gene expression and is highly conserved from unicellular yeasts to humans. Here, we present the development and implementation of a sequencing-based reverse genetic screen designed to identify nonessential genes that impact pre-mRNA splicing in the fission yeast Schizosaccharomyces pombe, an organism that shares many of the complex features of splicing in higher eukaryotes. Using a custom-designed barcoding scheme, we simultaneously queried ∼3000 mutant strains for their impact on the splicing efficiency of two endogenous pre-mRNAs. A total of 61 nonessential genes were identified whose deletions resulted in defects in pre-mRNA splicing; enriched among these were factors encoding known or predicted components of the spliceosome. Included among the candidates identified here are genes with well-characterized roles in other RNA-processing pathways, including heterochromatic silencing and 3' end processing. Splicing-sensitive microarrays confirm broad splicing defects for many of these factors, revealing novel functional connections between these pathways. Copyright © 2016 Larson et al.
Regulatory RNA design through evolutionary computation and strand displacement.
Rostain, William; Landrain, Thomas E; Rodrigo, Guillermo; Jaramillo, Alfonso
2015-01-01
The discovery and study of a vast number of regulatory RNAs in all kingdoms of life over the past decades has allowed the design of new synthetic RNAs that can regulate gene expression in vivo. Riboregulators, in particular, have been used to activate or repress gene expression. However, to accelerate and scale up the design process, synthetic biologists require computer-assisted design tools, without which riboregulator engineering will remain a case-by-case design process requiring expert attention. Recently, the design of RNA circuits by evolutionary computation and adapting strand displacement techniques from nanotechnology has proven to be suited to the automated generation of DNA sequences implementing regulatory RNA systems in bacteria. Herein, we present our method to carry out such evolutionary design and how to use it to create various types of riboregulators, allowing the systematic de novo design of genetic control systems in synthetic biology.
Programmable in vivo selection of arbitrary DNA sequences.
Ben Yehezkel, Tuval; Biezuner, Tamir; Linshiz, Gregory; Mazor, Yair; Shapiro, Ehud
2012-01-01
The extraordinary fidelity, sensory and regulatory capacity of natural intracellular machinery is generally confined to their endogenous environment. Nevertheless, synthetic bio-molecular components have been engineered to interface with the cellular transcription, splicing and translation machinery in vivo by embedding functional features such as promoters, introns and ribosome binding sites, respectively, into their design. Tapping and directing the power of intracellular molecular processing towards synthetic bio-molecular inputs is potentially a powerful approach, albeit limited by our ability to streamline the interface of synthetic components with the intracellular machinery in vivo. Here we show how a library of synthetic DNA devices, each bearing an input DNA sequence and a logical selection module, can be designed to direct its own probing and processing by interfacing with the bacterial DNA mismatch repair (MMR) system in vivo and selecting for the most abundant variant, regardless of its function. The device provides proof of concept for programmable, function-independent DNA selection in vivo and provides a unique example of a logical-functional interface of an engineered synthetic component with a complex endogenous cellular system. Further research into the design, construction and operation of synthetic devices in vivo may lead to other functional devices that interface with other complex cellular processes for both research and applied purposes.
Embedded CMOS basecalling for nanopore DNA sequencing.
Chengjie Wang; Junli Zheng; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim
2016-08-01
DNA sequencing based on nanopore sensors is now entering the marketplace. The ability to interface this technology to established CMOS microelectronics promises significant improvements in functionality and miniaturization. Among the key functions to benefit from this interface will be basecalling, the conversion of raw electronic molecular signatures to nucleotide sequence predictions. This paper presents the design and performance potential of custom CMOS base-callers embedded alongside nanopore sensors. A basecalling architecture implemented in 32-nm technology is discussed with the ability to process the equivalent of 20 human genomes per day in real-time at a power density of 5 W/cm2 assuming a 3-mer nanopore sensor.
Lu, Emily; Elizondo-Riojas, Miguel-Angel; Chang, Jeffrey T; Volk, David E
2014-06-10
Next-generation sequencing results from bead-based aptamer libraries have demonstrated that traditional DNA/RNA alignment software is insufficient. This is particularly true for X-aptamers containing specialty bases (W, X, Y, Z, ...) that are identified by special encoding. Thus, we sought an automated program that uses the inherent design scheme of bead-based X-aptamers to create a hypothetical reference library and Markov modeling techniques to provide improved alignments. Aptaligner provides this feature as well as length error and noise level cutoff features, is parallelized to run on multiple central processing units (cores), and sorts sequences from a single chip into projects and subprojects.
DeMAID/GA an Enhanced Design Manager's Aid for Intelligent Decomposition
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial public release of DeMAID in 1989, much research has been done in the areas of decomposition, concurrent engineering, parallel processing, and process management; many new tools and techniques have emerged. Based on these recent research and development efforts, numerous enhancements have been added to DeMAID to further aid the design manager in saving both cost and time in a design cycle. The key enhancement, a genetic algorithm (GA), will be available in the next public release called DeMAID/GA. The GA sequences the design processes to minimize the cost and time needed to converge on a solution. The major enhancements in the upgrade of DeMAID to DeMAID/GA are discussed in this paper. A sample conceptual design project is used to show how these enhancements can be applied to improve the design cycle.
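A permutation genetic algorithm for ordering processes can be sketched as follows. The coupling matrix, the objective (minimizing feedback dependencies), and the GA parameters are hypothetical illustrations of the general approach, not DeMAID/GA's actual cost and time model.

```python
import random

# Hypothetical coupling matrix: dep[i][j] = 1 means process i needs output from
# process j. The GA searches for an ordering that minimizes feedbacks (needed
# outputs that are only produced later), a stand-in for a cost/time objective.
dep = [[0, 1, 0, 0, 1],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0],
       [1, 0, 0, 0, 0],
       [0, 0, 0, 1, 0]]

def feedbacks(order):
    pos = {p: k for k, p in enumerate(order)}
    return sum(1 for i in order for j in order if dep[i][j] and pos[j] > pos[i])

def crossover(p1, p2):                        # order crossover (OX)
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(order, rate=0.2):                  # occasional swap of two positions
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(5), 5) for _ in range(30)]
for _ in range(100):
    pop.sort(key=feedbacks)
    parents = pop[:10]                        # elitist selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]

best = min(pop, key=feedbacks)
print("best ordering:", best, "feedbacks:", feedbacks(best))
```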
Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat
2017-01-01
Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.
NASA Astrophysics Data System (ADS)
Yang, Zhiyong; Tang, Zhanwen; Xie, Yongjie; Shi, Hanqiao; Zhang, Boming; Guo, Hongjun
2018-02-01
Composite space mirrors can completely replicate the high-precision surface of a mould by the replication process, but the actual surface accuracy of the replicated composite mirror always decreases. The lamina thickness of the prepreg affects the number of layers and the lay-up sequence of a composite space mirror, which in turn affects the mirror's surface accuracy. In our research, two groups of contrasting cases were studied through finite element analyses (FEA) and comparative experiments, with a focus on the effect of different prepreg lamina thicknesses and the corresponding lay-up sequences. We describe a dedicated analysis model, the validation process and the analysis of the results. The simulated and measured surface figures lead to the same conclusion: reducing the lamina thickness of the prepreg used in replicating a composite space mirror facilitates optimal design of the lay-up sequence for fabricating the composite mirror and can improve its surface accuracy.
Kitagawa, Mamiko; Nakamura, Kosuke; Kondo, Kazunari; Ubukata, Shoji; Akiyama, Hiroshi
2014-01-01
The contamination of processed vegetable foods with genetically modified tomatoes was investigated by the use of qualitative PCR methods to detect the cauliflower mosaic virus 35S promoter (P35S) and the kanamycin resistance gene (NPTII). DNA fragments of P35S and NPTII were detected in vegetable juice samples, possibly due to contamination with the genomes of cauliflower mosaic virus infecting juice ingredients of Brassica species and soil bacteria, respectively. Therefore, to detect the transformation construct sequences of GM tomatoes, primer pairs were designed for qualitative PCR to specifically detect the border region between P35S and NPTII, and the border region between nopaline synthase gene promoter and NPTII. No amplification of the targeted sequences was observed using genomic DNA purified from the juice ingredients. The developed qualitative PCR method is considered to be a reliable tool to check contamination of products with GM tomatoes.
Computational and experimental analysis of DNA shuffling
Maheshri, Narendra; Schaffer, David V.
2003-01-01
We describe a computational model of DNA shuffling based on the thermodynamics and kinetics of this process. The model independently tracks a representative ensemble of DNA molecules and records their states at every stage of a shuffling reaction. These data can subsequently be analyzed to yield information on any relevant metric, including reassembly efficiency, crossover number, type and distribution, and DNA sequence length distributions. The predictive ability of the model was validated by comparison to three independent sets of experimental data, and analysis of the simulation results led to several unique insights into the DNA shuffling process. We examine a tradeoff between crossover frequency and reassembly efficiency and illustrate the effects of experimental parameters on this relationship. Furthermore, we discuss conditions that promote the formation of useless “junk” DNA sequences or multimeric sequences containing multiple copies of the reassembled product. This model will therefore aid in the design of optimal shuffling reaction conditions. PMID:12626764
Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.
Sakai, Ryo; Aerts, Jan
2014-01-01
The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the domain task is to compare between two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
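As background, the per-position quantity a sequence logo encodes is the information content of each alignment column, R_i = log2(|alphabet|) − H_i. Below is a minimal sketch of that calculation for a nucleotide alignment; it illustrates the standard logo computation the abstract refers to, not the Sequence Diversity Diagram itself, and it is written in Python rather than the Processing language the authors used.

```python
import math
from collections import Counter

def column_information(seqs, alphabet="ACGT"):
    """Per-position information content (bits) of aligned sequences:
    R_i = log2(|alphabet|) - H_i, the height of a sequence-logo column."""
    info = []
    for i in range(len(seqs[0])):
        counts = Counter(s[i] for s in seqs if s[i] in alphabet)
        total = sum(counts.values())
        if total == 0:                        # column of gaps/unknown symbols
            info.append(0.0)
            continue
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        info.append(math.log2(len(alphabet)) - h)
    return info

aligned = ["ACGTAC", "ACGTTC", "ACGAAC", "ACGTGC"]
print([round(x, 2) for x in column_information(aligned)])
```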
Technology of welding aluminum alloys-II
NASA Technical Reports Server (NTRS)
1978-01-01
Step-by-step procedures were developed for high integrity manual and machine welding of aluminum alloys. Detailed instructions are given for each step with tables and graphs to specify materials and dimensions. Throughout work sequence, processing procedure designates manufacturing verification points and inspection points.
Mars Science Laboratory CHIMRA/IC/DRT Flight Software for Sample Acquisition and Processing
NASA Technical Reports Server (NTRS)
Kim, Won S.; Leger, Chris; Carsten, Joseph; Helmick, Daniel; Kuhn, Stephen; Redick, Richard; Trujillo, Diana
2013-01-01
The design methodologies of using sequence diagrams, multi-process functional flow diagrams, and hierarchical state machines were successfully applied in designing three MSL (Mars Science Laboratory) flight software modules responsible for handling actuator motions of the CHIMRA (Collection and Handling for In Situ Martian Rock Analysis), IC (Inlet Covers), and DRT (Dust Removal Tool) mechanisms. The methodologies were essential to specify complex interactions with other modules, support concurrent foreground and background motions, and handle various fault protections. Studying task scenarios with multi-process functional flow diagrams yielded great insight into overall design perspectives. Since the three modules require three different levels of background motion support, the methodologies presented in this paper provide an excellent comparison. All three modules are fully operational in flight.
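As an illustration of the hierarchical state machine pattern named above, here is a minimal sketch in which events not handled by a substate bubble up to its enclosing superstate. The state and event names are hypothetical, and the sketch is Python for brevity; the MSL modules themselves are flight software, not shown here.

```python
class State:
    """Minimal hierarchical state: events unhandled here bubble up to the parent."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.handlers = name, parent, {}

    def on(self, event, target):
        self.handlers[event] = target

    def handle(self, event):
        state = self
        while state is not None:
            if event in state.handlers:
                return state.handlers[event]
            state = state.parent              # defer to the enclosing superstate
        return None                           # event ignored

# Hypothetical actuator-motion states (names are illustrative only).
operating = State("OPERATING")
idle = State("IDLE", parent=operating)
moving = State("MOVING", parent=operating)
safing = State("SAFING")
operating.on("fault", safing)                 # fault response shared by all substates
idle.on("start_motion", moving)
moving.on("motion_done", idle)

current = idle
for event in ["start_motion", "fault"]:
    nxt = current.handle(event)
    print(f"{current.name} --{event}--> {nxt.name if nxt else 'ignored'}")
    current = nxt or current
```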
Villard, Pierre; Malausa, Thibaut
2013-07-01
SP-Designer is an open-source program providing a user-friendly tool for the design of specific PCR primer pairs from a DNA sequence alignment containing sequences from various taxa. SP-Designer selects PCR primer pairs for the amplification of DNA from a target species on the basis of several criteria: (i) primer specificity, as assessed by interspecific sequence polymorphism in the annealing regions, (ii) the biochemical characteristics of the primers and (iii) the intended PCR conditions. SP-Designer generates tables, detailing the primer pair and PCR characteristics, and a FASTA file locating the primer sequences in the original sequence alignment. SP-Designer is Windows-compatible and freely available from http://www2.sophia.inra.fr/urih/sophia_mart/sp_designer/info_sp_designer.php. © 2013 John Wiley & Sons Ltd.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-05-01
Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds and thus determining structural similarity without the sequence similarity would be desirable for the structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for the datasets in which sequence identity of any pair of sequences belongs to the twilight-zone. We propose SCPRED method that improves prediction accuracy for sequences that share twilight-zone pairwise similarity with sequences used for the prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods that are based on support vector machine, logistic regression, and ensemble of classifiers predictors. The SCPRED can accurately find similar structures for sequences that share low identity with sequence used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that the SCPRED's predictions can be successfully used as a post-processing filter to improve performance of modern fold classification methods.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-01-01
Background Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds and thus determining structural similarity without the sequence similarity would be desirable for the structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for the datasets in which sequence identity of any pair of sequences belongs to the twilight-zone. We propose SCPRED method that improves prediction accuracy for sequences that share twilight-zone pairwise similarity with sequences used for the prediction. Results SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods that are based on support vector machine, logistic regression, and ensemble of classifiers predictors. Conclusion The SCPRED can accurately find similar structures for sequences that share low identity with sequence used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that the SCPRED's predictions can be successfully used as a post-processing filter to improve performance of modern fold classification methods. PMID:18452616
Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew
2017-12-01
Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Semisupervised Gaussian Process for Automated Enzyme Search.
Mellor, Joseph; Grigoras, Ioana; Carbonell, Pablo; Faulon, Jean-Loup
2016-06-17
Synthetic biology is today harnessing the design of novel and greener biosynthesis routes for the production of added-value chemicals and natural products. The design of novel pathways often requires a detailed selection of enzyme sequences to import into the chassis at each of the reaction steps. To address such design requirements in an automated way, we present here a tool for exploring the space of enzymatic reactions. Given a reaction and an enzyme the tool provides a probability estimate that the enzyme catalyzes the reaction. Our tool first considers the similarity of a reaction to known biochemical reactions with respect to signatures around their reaction centers. Signatures are defined based on chemical transformation rules by using extended connectivity fingerprint descriptors. A semisupervised Gaussian process model associated with the similar known reactions then provides the probability estimate. The Gaussian process model uses information about both the reaction and the enzyme in providing the estimate. These estimates were validated experimentally by the application of the Gaussian process model to a newly identified metabolite in Escherichia coli in order to search for the enzymes catalyzing its associated reactions. Furthermore, we show with several pathway design examples how such ability to assign probability estimates to enzymatic reactions provides the potential to assist in bioengineering applications, providing experimental validation to our proposed approach. To the best of our knowledge, the proposed approach is the first application of Gaussian processes dealing with biological sequences and chemicals, the use of a semisupervised Gaussian process framework is also novel in the context of machine learning applied to bioinformatics. However, the ability of an enzyme to catalyze a reaction depends on the affinity between the substrates of the reaction and the enzyme. This affinity is generally quantified by the Michaelis constant KM. Therefore, we also demonstrate using Gaussian process regression to predict KM given a substrate-enzyme pair.
Exploration of the relationship between topology and designability of conformations
NASA Astrophysics Data System (ADS)
Leelananda, Sumudu P.; Towfic, Fadi; Jernigan, Robert L.; Kloczkowski, Andrzej
2011-06-01
Protein structures are evolutionarily more conserved than sequences, and sequences with very low sequence identity frequently share the same fold. This leads to the concept of protein designability. Some folds are more designable and lots of sequences can assume that fold. Elucidating the relationship between protein sequence and the three-dimensional (3D) structure that the sequence folds into is an important problem in computational structural biology. Lattice models have been utilized in numerous studies to model protein folds and predict the designability of certain folds. In this study, all possible compact conformations within a set of two-dimensional and 3D lattice spaces are explored. Complementary interaction graphs are then generated for each conformation and are described using a set of graph features. The full HP sequence space for each lattice model is generated and contact energies are calculated by threading each sequence onto all the possible conformations. Unique conformation giving minimum energy is identified for each sequence and the number of sequences folding to each conformation (designability) is obtained. Machine learning algorithms are used to predict the designability of each conformation. We find that the highly designable structures can be distinguished from other non-designable conformations based on certain graphical geometric features of the interactions. This finding confirms the fact that the topology of a conformation is an important determinant of the extent of its designability and suggests that the interactions themselves are important for determining the designability.
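A minimal version of this designability computation can be sketched as follows: enumerate self-avoiding walks on the 2D square lattice (all walks rather than only compact ones, for brevity), group conformations by their contact maps, thread every HP sequence onto every structure with an energy of −1 per H–H contact, and count how many sequences have each structure as their unique ground state. The chain length and the use of contact maps as structure identifiers are simplifying assumptions for illustration.

```python
from itertools import product

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walks(n):
    """Self-avoiding walks of n residues on the square lattice (first step fixed)."""
    def extend(path):
        if len(path) == n:
            yield tuple(path)
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if nxt not in path:
                yield from extend(path + [nxt])
    yield from extend([(0, 0), (1, 0)])

def contacts(path):
    """Pairs of non-consecutive residues that sit on adjacent lattice sites."""
    return frozenset(
        (i, j)
        for i in range(len(path)) for j in range(i + 2, len(path))
        if abs(path[i][0] - path[j][0]) + abs(path[i][1] - path[j][1]) == 1)

N = 8
# Distinct contact maps stand in for distinct structures; symmetry-related
# conformations share a contact map, so they are merged automatically.
structures = {contacts(p) for p in walks(N)}
designability = {s: 0 for s in structures}

for seq in product("HP", repeat=N):                  # the full HP sequence space
    energies = {s: -sum(seq[i] == "H" and seq[j] == "H" for i, j in s)
                for s in structures}
    best = min(energies.values())
    ground = [s for s, e in energies.items() if e == best]
    if len(ground) == 1:                             # unique ground state only
        designability[ground[0]] += 1

top = max(designability.items(), key=lambda kv: kv[1])
print(f"{len(structures)} distinct contact maps; the most designable one "
      f"attracts {top[1]} of the {2 ** N} sequences")
```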
ERIC Educational Resources Information Center
Noell, George H.; Gresham, Frank M.
2001-01-01
Describes design logic and potential uses of a variant of the multiple-baseline design. The multiple-baseline multiple-sequence (MBL-MS) consists of multiple-baseline designs that are interlaced with one another and include all possible sequences of treatments. The MBL-MS design appears to be primarily useful for comparison of treatments taking…
Simplified microprocessor design for VLSI control applications
NASA Technical Reports Server (NTRS)
Cameron, K.
1991-01-01
A design technique for microprocessors combining the simplicity of reduced instruction set computers (RISC's) with the richer instruction sets of complex instruction set computers (CISC's) is presented. They utilize the pipelined instruction decode and datapaths common to RISC's. Instruction invariant data processing sequences which transparently support complex addressing modes permit the formulation of simple control circuitry. Compact implementations are possible since neither complicated controllers nor large register sets are required.
Modality dependence and intermodal transfer in the Corsi Spatial Sequence Task: Screen vs. Floor.
Röser, Andrea; Hardiess, Gregor; Mallot, Hanspeter A
2016-07-01
Four versions of the Corsi Spatial Sequence Task (CSST) were tested in a complete within-subject design, investigating whether participants' performance depends on the modality of task presentation and reproduction that put different demands on spatial processing. Presentation of the sequence (encoding phase) and the reproduction (recall phase) were each carried out either on a computer screen or on the floor of a room, involving actual walking in the recall phase. Combinations of the two different encoding and recall procedures result in the modality conditions Screen-Screen, Screen-Floor, Floor-Screen, and Floor-Floor. Results show the expected decrease in performance with increasing sequence length, which is likely due to processing limitations of working memory. We also found differences in performance between the modality conditions indicating different involvements of spatial working memory processes. Participants performed best in the Screen-Screen modality condition. Floor-Screen and Floor-Floor modality conditions require additional working memory resources for reference frame transformation and spatial updating, respectively; the resulting impairment of the performance was about the same in these two conditions. Finally, the Screen-Floor modality condition requires both types of additional spatial demands and led to the poorest performance. Therefore, we suggest that besides the well-known spatial requirements of CSST, additional working memory resources are demanded in walking CSST supporting processes such as spatial updating, mental rotation, reference frame transformation, and the control of walking itself.
Prediction of RNA secondary structures: from theory to models and real molecules
NASA Astrophysics Data System (ADS)
Schuster, Peter
2006-05-01
RNA secondary structures are derived from RNA sequences, which are strings built from the natural four letter nucleotide alphabet, {AUGC}. These coarse-grained structures, in turn, are tantamount to constrained strings over a three letter alphabet. Hence, the secondary structures are discrete objects and the number of sequences always exceeds the number of structures. The sequences built from two letter alphabets form perfect structures when the nucleotides can form a base pair, as is the case with {GC} or {AU}, but the relation between the sequences and structures differs strongly from the four letter alphabet. A comprehensive theory of RNA structure is presented, which is based on the concepts of sequence space and shape space, being a space of structures. It sets the stage for modelling processes in ensembles of RNA molecules like evolutionary optimization or kinetic folding as dynamical phenomena guided by mappings between the two spaces. The number of minimum free energy (mfe) structures is always smaller than the number of sequences, even for two letter alphabets. Folding of RNA molecules into mfe structures constitutes a non-invertible mapping from sequence space onto shape space. The preimage of a structure in sequence space is defined as its neutral network. Similarly the set of suboptimal structures is the preimage of a sequence in shape space. This set represents the conformation space of a given sequence. The evolutionary optimization of structures in populations is a process taking place in sequence space, whereas kinetic folding occurs in molecular ensembles that optimize free energy in conformation space. Efficient folding algorithms based on dynamic programming are available for the prediction of secondary structures for given sequences. The inverse problem, the computation of sequences for predefined structures, is an important tool for the design of RNA molecules with tailored properties. Simultaneous folding or cofolding of two or more RNA molecules can be modelled readily at the secondary structure level and allows prediction of the most stable (mfe) conformations of complexes together with suboptimal states. Cofolding algorithms are important tools for efficient and highly specific primer design in the polymerase chain reaction (PCR) and help to explain the mechanisms of small interference RNA (si-RNA) molecules in gene regulation. The evolutionary optimization of RNA structures is illustrated by the search for a target structure and mimics aptamer selection in evolutionary biotechnology. It occurs typically in steps consisting of short adaptive phases interrupted by long epochs of little or no obvious progress in optimization. During these quasi-stationary epochs the populations are essentially confined to neutral networks where they search for sequences that allow a continuation of the adaptive process. Modelling RNA evolution as a simultaneous process in sequence and shape space provides answers to questions of the optimal population size and mutation rates. Kinetic folding is a stochastic process in conformation space. Exact solutions are derived by direct simulation in the form of trajectory sampling or by solving the master equation. The exact solutions can be approximated straightforwardly by Arrhenius kinetics on barrier trees, which represent simplified versions of conformational energy landscapes. The existence of at least one sequence forming any arbitrarily chosen pair of structures is granted by the intersection theorem.
Folding kinetics is the key to understanding and designing multistable RNA molecules or RNA switches. These RNAs form two or more long lived conformations, and conformational changes occur either spontaneously or are induced through binding of small molecules or other biopolymers. RNA switches are found in nature where they act as elements in genetic and metabolic regulation. The reliability of RNA secondary structure prediction is limited by the accuracy with which the empirical parameters can be determined and by principal deficiencies, for example by the lack of energy contributions resulting from tertiary interactions. In addition, native structures may be determined by folding kinetics rather than by thermodynamics. We address the first problem by considering base pair probabilities or base pairing entropies, which are derived from the partition function of conformations. A high base pair probability corresponding to a low pairing entropy is taken as an indicator of a high reliability of prediction. Pseudoknots are discussed as an example of a tertiary interaction that is highly important for RNA function. Moreover, pseudoknot formation is readily incorporated into structure prediction algorithms. Some examples of experimental data on RNA secondary structures that are readily explained using the landscape concept are presented. They deal with (i) properties of RNA molecules with random sequences, (ii) RNA molecules from restricted alphabets, (iii) existence of neutral networks, (iv) shape space covering, (v) riboswitches and (vi) evolution of non-coding RNAs as an example of evolution restricted to neutral networks.
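The dynamic-programming folding mentioned above can be illustrated with the classic base-pair maximization recurrence (Nussinov-style). It is a simplified stand-in for the full nearest-neighbour free-energy minimization used by real secondary structure prediction programs, but it shows the same divide-and-conquer structure over subsequences.

```python
def nussinov(seq, min_loop=3):
    """Base-pair maximization by dynamic programming: dp[i][j] holds the maximum
    number of nested base pairs formable on the subsequence seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # grow subsequences outwards
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                   # base i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:     # base i paired with base k
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, dp[i + 1][k - 1] + 1 + right)
            dp[i][j] = best
    return dp[0][n - 1]

# Toy hairpin: three G-C pairs enclosing an unpaired loop.
print(nussinov("GGGAAAUCCC"))
```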
Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.
Reinl, Maren; Bartels, Andreas
2014-11-15
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
2012-01-01
Background Microbial anaerobic digestion (AD) is used as a waste treatment process to degrade complex organic compounds into methane. The archaeal and bacterial taxa involved in AD are well known, whereas composition of the fungal community in the process has been less studied. The present study aimed to reveal the composition of archaeal, bacterial and fungal communities in response to increasing organic loading in mesophilic and thermophilic AD processes by applying 454 amplicon sequencing technology. Furthermore, a DNA microarray method was evaluated in order to develop a tool for monitoring the microbiological status of AD. Results The 454 sequencing showed that the diversity and number of bacterial taxa decreased with increasing organic load, while archaeal i.e. methanogenic taxa remained more constant. The number and diversity of fungal taxa increased during the process and varied less in composition with process temperature than bacterial and archaeal taxa, even though the fungal diversity increased with temperature as well. Evaluation of the microarray using AD sample DNA showed correlation of signal intensities with sequence read numbers of corresponding target groups. The sensitivity of the test was found to be about 1%. Conclusions The fungal community survives in anoxic conditions and grows with increasing organic loading, suggesting that Fungi may contribute to the digestion by metabolising organic nutrients for bacterial and methanogenic groups. The microarray proof of principle tests suggest that the method has the potential for semiquantitative detection of target microbial groups given that comprehensive sequence data is available for probe design. PMID:22727142
Improvements on a privacy-protection algorithm for DNA sequences with generalization lattices.
Li, Guang; Wang, Yadong; Su, Xiaohong
2012-10-01
When developing personal DNA databases, there must be an appropriate guarantee of anonymity, which means that the data cannot be related back to individuals. DNA lattice anonymization (DNALA) is a successful method for making personal DNA sequences anonymous. However, it uses time-consuming multiple sequence alignment and a low-accuracy greedy clustering algorithm. Furthermore, DNALA is not an online algorithm, and so it cannot quickly return results when the database is updated. This study improves the DNALA method. Specifically, we replaced the multiple sequence alignment in DNALA with global pairwise sequence alignment to save time, and we designed a hybrid clustering algorithm comprised of a maximum weight matching (MWM)-based algorithm and an online algorithm. The MWM-based algorithm is more accurate than the greedy algorithm in DNALA and has the same time complexity. The online algorithm can process data quickly when the database is updated. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
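The maximum weight matching step can be sketched with an off-the-shelf matching routine, assuming the networkx package is available. difflib's similarity ratio is used here as a crude stand-in for the global pairwise alignment score the paper uses, and the sequences are toy examples.

```python
import difflib
import networkx as nx

def pair_sequences(seqs):
    """Pair sequences by maximum-weight matching on a complete similarity graph.
    difflib's ratio is a crude stand-in for a global pairwise alignment score."""
    g = nx.Graph()
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            sim = difflib.SequenceMatcher(None, seqs[i], seqs[j]).ratio()
            g.add_edge(i, j, weight=sim)
    # maxcardinality=True forces as many sequences as possible into pairs.
    return nx.max_weight_matching(g, maxcardinality=True)

seqs = ["ACGTACGTAC", "ACGTACGTAA", "TTGCATGCAT", "TTGCATGCAA"]
for i, j in pair_sequences(seqs):
    print(f"pair: {seqs[i]} <-> {seqs[j]}")
```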
Computationally mapping sequence space to understand evolutionary protein engineering.
Armstrong, Kathryn A; Tidor, Bruce
2008-01-01
Evolutionary protein engineering has been dramatically successful, producing a wide variety of new proteins with altered stability, binding affinity, and enzymatic activity. However, the success of such procedures is often unreliable, and the impact of the choice of protein, engineering goal, and evolutionary procedure is not well understood. We have created a framework for understanding aspects of the protein engineering process by computationally mapping regions of feasible sequence space for three small proteins using structure-based design protocols. We then tested the ability of different evolutionary search strategies to explore these sequence spaces. The results point to a non-intuitive relationship between the error-prone PCR mutation rate and the number of rounds of replication. The evolutionary relationships among feasible sequences reveal hub-like sequences that serve as particularly fruitful starting sequences for evolutionary search. Moreover, genetic recombination procedures were examined, and tradeoffs relating sequence diversity and search efficiency were identified. This framework allows us to consider the impact of protein structure on the allowed sequence space and therefore on the challenges that each protein presents to error-prone PCR and genetic recombination procedures.
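The tradeoff between error-prone PCR mutation rate and number of rounds can be illustrated with a toy substitution-only model. The rates, round counts, and parent sequence below are illustrative assumptions, and the model ignores amplification, insertions/deletions, and selection.

```python
import random

def error_prone_pcr(seq, per_base_rate, rounds, seed=0):
    """Toy error-prone PCR: in each round every base mutates independently with
    the given per-base substitution rate (no indels, amplification or selection)."""
    rng = random.Random(seed)
    bases = "ACGT"
    s = list(seq)
    for _ in range(rounds):
        for i, b in enumerate(s):
            if rng.random() < per_base_rate:
                s[i] = rng.choice([x for x in bases if x != b])
    return "".join(s)

parent = "ATGGCTAGCAAGGGCGAGGAG"
for rate, rounds in [(0.01, 3), (0.05, 1)]:
    mutant = error_prone_pcr(parent, rate, rounds)
    diffs = sum(a != b for a, b in zip(parent, mutant))
    print(f"rate={rate}, rounds={rounds}: {diffs} substitutions")
```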
Cell and module formation research area
NASA Technical Reports Server (NTRS)
Bickler, D. B.
1982-01-01
Metallization is discussed. The influence of hydrogen on the firing of base-metal pastes in reducing atmospheres is reported. A method for optimization of metallization patterns is presented. A process sequence involving an AR coating and thick-film metallization system capable of penetrating the AR coating during firing is reported. Design and construction of the NMA implantation machine is reported. Implanted back-surface fields and NMA primary (front) junctions are discussed. The use of glass beads, a wave-soldering device, and ion milling is reported. Processing through the module fabrication and environmental testing of its design are reported. Metallization patterns by mathematical optimization are assessed.
Single-cell transcriptome conservation in cryopreserved cells and tissues.
Guillaumet-Adkins, Amy; Rodríguez-Esteban, Gustavo; Mereu, Elisabetta; Mendez-Lago, Maria; Jaitin, Diego A; Villanueva, Alberto; Vidal, August; Martinez-Marti, Alex; Felip, Enriqueta; Vivancos, Ana; Keren-Shaul, Hadas; Heath, Simon; Gut, Marta; Amit, Ido; Gut, Ivo; Heyn, Holger
2017-03-01
A variety of single-cell RNA preparation procedures have been described. So far, protocols require fresh material, which hinders complex study designs. We describe a sample preservation method that maintains transcripts in viable single cells, allowing one to disconnect time and place of sampling from subsequent processing steps. We sequence single-cell transcriptomes from >1000 fresh and cryopreserved cells using 3'-end and full-length RNA preparation methods. Our results confirm that the conservation process did not alter transcriptional profiles. This substantially broadens the scope of applications in single-cell transcriptomics and could lead to a paradigm shift in future study designs.
Cohn, Neil; Kutas, Marta
2015-01-01
Inference has long been emphasized in the comprehension of verbal and visual narratives. Here, we measured event-related brain potentials to visual sequences designed to elicit inferential processing. In Impoverished sequences, an expressionless “onlooker” watches an undepicted event (e.g., person throws a ball for a dog, then watches the dog chase it) just prior to a surprising finale (e.g., someone else returns the ball), which should lead to an inference (i.e., the different person retrieved the ball). Implied sequences alter this narrative structure by adding visual cues to the critical panel such as a surprised facial expression to the onlooker implying they saw an unexpected, albeit undepicted, event. In contrast, Expected sequences show a predictable, but then confounded, event (i.e., dog retrieves ball, then different person returns it), and Explicit sequences depict the unexpected event (i.e., different person retrieves then returns ball). At the critical penultimate panel, sequences representing depicted events (Explicit, Expected) elicited a larger posterior positivity (P600) than the relatively passive events of an onlooker (Impoverished, Implied), though Implied sequences were slightly more positive than Impoverished sequences. At the subsequent and final panel, a posterior positivity (P600) was greater to images in Impoverished sequences than those in Explicit and Implied sequences, which did not differ. In addition, both sequence types requiring inference (Implied, Impoverished) elicited a larger frontal negativity than those explicitly depicting events (Expected, Explicit). These results show that neural processing differs for visual narratives omitting events versus those depicting events, and that the presence of subtle visual cues can modulate such effects presumably by altering narrative structure. PMID:26320706
Timeliner: Automating Procedures on the ISS
NASA Technical Reports Server (NTRS)
Brown, Robert; Braunstein, E.; Brunet, Rick; Grace, R.; Vu, T.; Zimpfer, Doug; Dwyer, William K.; Robinson, Emily
2002-01-01
Timeliner has been developed as a tool to automate procedural tasks. These tasks may be sequential tasks that would typically be performed by a human operator, or precisely ordered sequencing tasks that allow autonomous execution of a control process. The Timeliner system includes elements for compiling and executing sequences that are defined in the Timeliner language. The Timeliner language was specifically designed to allow easy definition of scripts that provide sequencing and control of complex systems. The execution environment provides real-time monitoring and control based on the commands and conditions defined in the Timeliner language. The Timeliner sequence control may be preprogrammed, compiled from Timeliner "scripts," or it may consist of real-time, interactive inputs from system operators. In general, the Timeliner system lowers the workload for mission or process control operations. In a mission environment, scripts can be used to automate spacecraft operations including autonomous or interactive vehicle control, performance of preflight and post-flight subsystem checkouts, or handling of failure detection and recovery. Timeliner may also be used for mission payload operations, such as stepping through pre-defined procedures of a scientific experiment.
Bidard, J N; de Nadai, F; Rovere, C; Moinier, D; Laur, J; Martinez, J; Cuber, J C; Kitabgi, P
1993-01-01
Neurotensin (NT) and neuromedin N (NN) are two related biologically active peptides that are encoded in the same precursor molecule. In the rat, the precursor consists of a 169-residue polypeptide starting with an N-terminal signal peptide and containing in its C-terminal region one copy each of NT and NN. NN precedes NT and is separated from it by a Lys-Arg sequence. Two other Lys-Arg sequences flank the N-terminus of NN and the C-terminus of NT. A fourth Lys-Arg sequence occurs near the middle of the precursor and is followed by an NN-like sequence. Finally, an Arg-Arg pair is present within the NT moiety. The four Lys-Arg doublets represent putative processing sites in the precursor molecule. The present study was designed to investigate the post-translational processing of the NT/NN precursor in the rat medullary thyroid carcinoma (rMTC) 6-23 cell line, which synthesizes large amounts of NT upon dexamethasone treatment. Five region-specific antisera recognizing the free N- or C-termini of sequences adjacent to the basic doublets were produced, characterized and used for immunoblotting and radioimmunoassay studies in combination with gel filtration, reverse-phase h.p.l.c. and trypsin digestion of rMTC 6-23 cell extracts. Because two of the antigenic sequences, i.e. NN and the NN-like sequence, start with a lysine residue that is essential for recognition by their respective antisera, a micromethod by which trypsin specifically cleaves at arginine residues was developed. The results show that dexamethasone-treated rMTC 6-23 cells produced comparable amounts of NT, NN and a peptide corresponding to a large N-terminal precursor fragment lacking the NN and NT moieties. This large fragment was purified. N-Terminal sequencing revealed that it started at residue Ser23 of the prepro-NT/NN sequence, and thus established the Cys22-Ser23 bond as the cleavage site of the signal peptide. Two other large N-terminal fragments bearing respectively the NN and NT sequences at their C-termini were present in lower amounts. The NN-like sequence was internal to all the large fragments. There was no evidence for the presence of peptides with the NN-like sequence at their N-termini. This shows that, in rMTC 6-23 cells, the precursor is readily processed at the three Lys-Arg doublets that flank and separate the NT and NN sequences. In contrast, the Lys-Arg doublet that precedes the NN-like sequence is not processed in this system.
Retrofitting activated sludge systems to intermittent aeration for nitrogen removal.
Hanhan, O; Artan, N; Orhon, D
2002-01-01
The paper provides the basis and the conceptual approach of applying process kinetics and modelling to the design of alternating activated sludge systems for retrofitting existing activated sludge plants to intermittent aeration for nitrogen removal. It shows the significant role of the two specific parameters, namely, the aerated fraction and the cycle time ratio on process performance through model simulations and proposes a way to incorporate them into a design procedure using process stoichiometry and mass balance. It illustrates the effect of these parameters, together with the sludge age, in establishing the balance between the denitrification potential and the available nitrogen created in the anoxic/aerobic sequences of system operation.
Makowsky, Robert; Cox, Christian L; Roelke, Corey; Chippindale, Paul T
2010-11-01
Determining the appropriate gene for phylogeny reconstruction can be a difficult process. Rapidly evolving genes tend to resolve recent relationships, but suffer from alignment issues and increased homoplasy among distantly related species. Conversely, slowly evolving genes generally perform best for deeper relationships, but lack sufficient variation to resolve recent relationships. We determine the relationship between sequence divergence and Bayesian phylogenetic reconstruction ability using both natural and simulated datasets. The natural data are based on 28 well-supported relationships within the subphylum Vertebrata. Sequences of 12 genes were acquired and Bayesian analyses were used to determine phylogenetic support for correct relationships. Simulated datasets were designed to determine whether an optimal range of sequence divergence exists across extreme phylogenetic conditions. Across all genes we found that an optimal range of divergence for resolving the correct relationships does exist, although this level of divergence expectedly depends on the distance metric. Simulated datasets show that an optimal range of sequence divergence exists across diverse topologies and models of evolution. We determine that a simple-to-measure property of genetic sequences (genetic distance) is related to phylogenetic reconstruction ability in Bayesian analyses. This information should be useful for selecting the most informative gene to resolve any relationships, especially those that are difficult to resolve, as well as minimizing both cost and confounding information during project design. Copyright © 2010. Published by Elsevier Inc.
Mission Management Computer and Sequencing Hardware for RLV-TD HEX-01 Mission
NASA Astrophysics Data System (ADS)
Gupta, Sukrat; Raj, Remya; Mathew, Asha Mary; Koshy, Anna Priya; Paramasivam, R.; Mookiah, T.
2017-12-01
Reusable Launch Vehicle-Technology Demonstrator Hypersonic Experiment (RLV-TD HEX-01) mission posed some unique challenges in the design and development of avionics hardware. This work presents the details of mission critical avionics hardware, mainly the Mission Management Computer (MMC) and sequencing hardware. The Navigation, Guidance and Control (NGC) chain for RLV-TD is dual redundant with cross-strapped Remote Terminals (RTs) interfaced through the MIL-STD-1553B bus. MMC is the Bus Controller on the 1553 bus, which performs the functions of GPS-aided navigation, guidance, digital autopilot and sequencing for the RLV-TD launch vehicle at different periodicities (10, 20, 500 ms). Digital autopilot execution in MMC with a periodicity of 10 ms (in the ascent phase) is introduced for the first time and was successfully demonstrated in the flight. MMC is built around the Intel i960 processor and has inbuilt fault tolerance features like ECC for memories. Fault Detection and Isolation schemes are implemented to isolate a failed MMC. The sequencing hardware comprises the Stage Processing System (SPS) and the Command Execution Module (CEM). SPS is an RT on the 1553 bus which receives the sequencing and control related commands from the MMCs and posts them to downstream modules after proper error handling for final execution. SPS is designed as a high reliability system by incorporating various fault tolerance and fault detection features. CEM is a relay-based module for sequence command execution.
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
Development of Neuromorphic Sift Operator with Application to High Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy challenge in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved algorithm speed through simplifications or software modifications that affect the accuracy of the image matching process. This research tries to improve speed without altering the accuracy of the same algorithm, using neuromorphic techniques. We have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT and have investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delay of the elements used, including MAC and comparator, we have estimated the resulting chip's performance for three scenarios: Full HD movie (videogrammetry), 24 MP (UAV photogrammetry), and an 88 MP image sequence. Our estimates lead to approximately 3000 fps for a Full HD movie, 250 fps for a 24 MP image sequence and 68 fps for an 88 MP Ultracam image sequence, which can be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, which is not comparable to current workflows.
Venkata Mohan, S; Chandrasekhara Rao, N; Krishna Prasad, K; Murali Krishna, P; Sreenivas Rao, R; Sarma, P N
2005-06-20
The Taguchi robust experimental design (DOE) methodology has been applied to a dynamic anaerobic process treating complex wastewater in an anaerobic sequencing batch biofilm reactor (AnSBBR). For optimizing the process as well as evaluating the influence of different factors on it, the uncontrollable (noise) factors have been considered. The Taguchi methodology adopting a dynamic approach is the first of its kind for studying anaerobic process evaluation and process optimization. The designed experimental methodology consisted of four phases (planning, conducting, analysis, and validation) connected sequence-wise to achieve the overall optimization. In the experimental design, five controllable factors, i.e., organic loading rate (OLR), inlet pH, biodegradability (BOD/COD ratio), temperature, and sulfate concentration, along with two uncontrollable (noise) factors, volatile fatty acids (VFA) and alkalinity, were considered at two levels for optimization of the anaerobic system. Thirty-two anaerobic experiments were conducted with different combinations of factors, and the results obtained in terms of substrate degradation rates were processed in Qualitek-4 software to study the main effects of individual factors, the interactions between factors, and the signal-to-noise (S/N) ratio. Attempts were also made to achieve optimum conditions. Studies on the influence of individual factors on process performance revealed the intensive effect of OLR. In multiple-factor interaction studies, biodegradability with other factors, such as temperature, pH, and sulfate, showed the maximum influence on process performance. The optimum conditions for efficient performance of the anaerobic system in treating complex wastewater, obtained by considering dynamic (noise) factors, are a higher organic loading rate of 3.5 Kg COD/m3 day, neutral pH with high biodegradability (BOD/COD ratio of 0.5), a mesophilic temperature range (40 degrees C), and a low sulfate concentration (700 mg/L). The optimization resulted in enhanced anaerobic performance (56.7%), from a substrate degradation rate (SDR) of 1.99 to 3.13 Kg COD/m3 day. Considering the obtained optimum factors, further validation experiments were carried out, which showed enhanced process performance (3.04 Kg COD/m3 day from 1.99 Kg COD/m3 day), accounting for a 52.13% improvement with the optimized process conditions. The proposed method facilitated a systematic mathematical approach to understanding the complex multi-species anaerobic process treating complex chemical wastewater by considering the uncontrollable factors. Copyright (c) 2005 Wiley Periodicals, Inc.
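The S/N analysis step described above lends itself to a brief illustration. The sketch below is not the Qualitek-4 workflow used in the study; it only assumes the standard "larger-the-better" Taguchi signal-to-noise formulation (appropriate when maximizing substrate degradation rate) and uses made-up responses for two hypothetical OLR settings.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi 'larger-the-better' S/N ratio for one trial.

    y: responses (e.g., substrate degradation rates) measured across the
    noise-factor combinations (VFA, alkalinity) for that trial.
    """
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical responses for two OLR levels, each run under four noise combinations.
trials = {
    "OLR_level_1": [1.8, 2.1, 1.9, 2.0],
    "OLR_level_2": [3.0, 3.2, 2.9, 3.1],
}

for setting, responses in trials.items():
    print(setting, "S/N =", round(sn_larger_is_better(responses), 2), "dB")
# The factor level with the highest mean S/N across trials is taken as optimal.
```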
Uprobe: a genome-wide universal probe resource for comparative physical mapping in vertebrates.
Kellner, Wendy A; Sullivan, Robert T; Carlson, Brian H; Thomas, James W
2005-01-01
Interspecies comparisons are important for deciphering the functional content and evolution of genomes. The expansive array of >70 public vertebrate genomic bacterial artificial chromosome (BAC) libraries can provide a means of comparative mapping, sequencing, and functional analysis of targeted chromosomal segments that is independent of and complementary to whole-genome sequencing. However, at the present time, no complementary resource exists for the efficient targeted physical mapping of the majority of these BAC libraries. Universal overgo-hybridization probes, designed from regions of sequenced genomes that are highly conserved between species, have been demonstrated to be an effective resource for the isolation of orthologous regions from multiple BAC libraries in parallel. Here we report the application of the universal probe design principle across entire genomes, and the subsequent creation of a complementary probe resource, Uprobe, for screening vertebrate BAC libraries. Uprobe currently consists of whole-genome sets of universal overgo-hybridization probes designed for screening mammalian or avian/reptilian libraries. Retrospective analysis, experimental validation of the probe design process on a panel of representative BAC libraries, and estimates of probe coverage across the genome indicate that the majority of all eutherian and avian/reptilian genes or regions of interest can be isolated using Uprobe. Future implementation of the universal probe design strategy will be used to create an expanded number of whole-genome probe sets that will encompass all vertebrate genomes.
RNAblueprint: flexible multiple target nucleic acid sequence design.
Hammer, Stefan; Tschiatschek, Birgit; Flamm, Christoph; Hofacker, Ivo L; Findeiß, Sven
2017-09-15
Realizing the value of synthetic biology in biotechnology and medicine requires the design of molecules with specialized functions. Due to its close structure-to-function relationship, and the availability of good structure prediction methods and energy models, RNA is perfectly suited to be synthetically engineered with predefined properties. However, currently available RNA design tools cannot be easily adapted to accommodate new design specifications. Furthermore, complicated sampling and optimization methods are often developed to suit a specific RNA design goal, adding to their inflexibility. We developed a C++ library implementing a graph coloring approach to stochastically sample sequences compatible with structural and sequence constraints from the typically very large solution space. The approach allows the solution space to be specified and explored in a well-defined way. Our library also guarantees uniform sampling, which makes optimization runs performant not only by avoiding re-evaluation of already found solutions, but also by raising the probability of finding better solutions in long optimization runs. We show that our software can be combined with any other software package to allow diverse RNA design applications. Scripting interfaces allow the easy adaptation of existing code to accommodate new scenarios, making the whole design process very flexible. We implemented example design approaches written in Python to demonstrate these advantages. RNAblueprint, Python implementations and benchmark datasets are available on GitHub: https://github.com/ViennaRNA. Contact: s.hammer@univie.ac.at, ivo@tbi.univie.ac.at or sven@tbi.univie.ac.at. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
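As a rough illustration of sampling sequences compatible with a secondary-structure constraint, the sketch below generates random RNA sequences whose paired positions (given in dot-bracket notation) are forced to be complementary. It is only a toy stand-in for RNAblueprint's graph-coloring sampler, does not guarantee uniform sampling over the full solution space, and the input structure is hypothetical.

```python
import random

PAIRS = [("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")]
BASES = "AUGC"

def pair_table(dotbracket):
    """Return list of (i, j) base pairs from a dot-bracket string."""
    stack, pairs = [], []
    for i, c in enumerate(dotbracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.append((stack.pop(), i))
    return pairs

def sample_sequence(dotbracket, rng=random):
    seq = [None] * len(dotbracket)
    for i, j in pair_table(dotbracket):
        seq[i], seq[j] = rng.choice(PAIRS)   # paired positions get a valid base pair
    for i, c in enumerate(seq):
        if c is None:                        # unpaired positions are free
            seq[i] = rng.choice(BASES)
    return "".join(seq)

structure = "(((....)))"
print([sample_sequence(structure) for _ in range(3)])
```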
32-channel single photon counting module for ultrasensitive detection of DNA sequences
NASA Astrophysics Data System (ADS)
Gudkov, Georgiy; Dhulla, Vinit; Borodin, Anatoly; Gavrilov, Dmitri; Stepukhovich, Andrey; Tsupryk, Andrey; Gorbovitski, Boris; Gorfinkel, Vera
2006-10-01
We continue our work on the design and implementation of multi-channel single photon detection systems for highly sensitive detection of ultra-weak fluorescence signals for high-performance, multi-lane DNA sequencing instruments. A fiberized, 32-channel single photon detection (SPD) module based on the single photon avalanche diode (SPAD) model C30902S-DTC from Perkin Elmer Optoelectronics (PKI) has been designed and implemented. The unavailability of high-performance, large-area SPAD arrays and our desire to design high-performance photon counting systems drive us to use individual diodes. Slight modifications to our quenching circuit have doubled the linear range of the system from 1 MHz to 2 MHz, which is the upper limit for these devices, and the maximum saturation count rate has increased to 14 MHz. The detector module comprises a single-board computer (PC-104) that enables data visualization, recording, processing, and transfer. Very low dark counts (300-1000 counts/s), robustness, efficiency, simple data collection and processing, ease of connectivity to any other application with similar requirements, and performance comparable to the best commercially available single photon counting module (SPCM from PKI) are some of the features of this system.
Identification of sequence-structure RNA binding motifs for SELEX-derived aptamers.
Hoinka, Jan; Zotenko, Elena; Friedman, Adam; Sauna, Zuben E; Przytycka, Teresa M
2012-06-15
Systematic Evolution of Ligands by EXponential Enrichment (SELEX) represents a state-of-the-art technology to isolate single-stranded (ribo)nucleic acid fragments, named aptamers, which bind to a molecule (or molecules) of interest via specific structural regions induced by their sequence-dependent fold. This powerful method has applications in designing protein inhibitors, molecular detection systems, therapeutic drugs and antibody replacements, among others. However, full understanding, and consequently optimal utilization, of the process has lagged behind its wide application due to the lack of dedicated computational approaches. At the same time, the combination of SELEX with novel sequencing technologies is beginning to provide the data that will allow the examination of a variety of properties of the selection process. To close this gap we developed Aptamotif, a computational method for the identification of sequence-structure motifs in SELEX-derived aptamers. To increase the chances of identifying functional motifs, Aptamotif uses an ensemble-based approach. We validated the method using two published aptamer datasets containing experimentally determined motifs of increasing complexity. We were able to recreate the authors' findings to a high degree, thus proving the capability of our approach to identify binding motifs in SELEX data. Additionally, using our new experimental dataset, we illustrate the application of Aptamotif to elucidate several properties of the selection process.
Scherer, N M; Basso, D M
2008-09-16
DNATagger is a web-based tool for coloring and editing DNA, RNA and protein sequences and alignments. It is dedicated to the visualization of protein coding sequences and also protein sequence alignments to facilitate the comprehension of evolutionary processes in sequence analysis. The distinctive feature of DNATagger is the use of codons as informative units for coloring DNA and RNA sequences. The codons are colored according to their corresponding amino acids. It is the first program that colors codons in DNA sequences without being affected by "out-of-frame" gaps of alignments. It can handle single gaps and gaps inside the triplets. The program also provides the possibility to edit the alignments and change color patterns and translation tables. DNATagger is a JavaScript application, following the W3C guidelines, designed to work on standards-compliant web browsers. It therefore requires no installation and is platform independent. The web-based DNATagger is available as free and open source software at http://www.inf.ufrgs.br/~dmbasso/dnatagger/.
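The codon-aware coloring idea, grouping every three non-gap bases into a codon even when alignment gaps fall inside the triplet, can be sketched as follows. This is not DNATagger's JavaScript code; it is a minimal Python illustration that assigns a codon index to each aligned column so a renderer could color columns of the same codon identically, using a hypothetical input alignment.

```python
def codon_indices(aligned_dna):
    """Map each column of a gapped, in-frame coding sequence to a codon index.

    Gap columns get None; every three non-gap bases form one codon, so gaps
    inside a triplet ("out-of-frame" gaps) do not shift the codon grouping.
    """
    indices, seen_bases = [], 0
    for ch in aligned_dna:
        if ch == "-":
            indices.append(None)
        else:
            indices.append(seen_bases // 3)
            seen_bases += 1
    return indices

aligned = "ATG--GC-AAAT"            # hypothetical aligned coding sequence
print(list(zip(aligned, codon_indices(aligned))))
# Columns tagged 0 belong to codon ATG, columns tagged 1 to GCA, columns tagged 2 to AAT.
```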
Laboratory Activities for Developing Process Skills.
ERIC Educational Resources Information Center
Institute for Services to Education, Inc., Washington, DC.
This workbook contains laboratory exercises designed for use in a college introductory biology course. Each exercise helps the student develop a basic science skill. The exercises are arranged in a hierarchical sequence suggesting the scientific method. Each skill facilitates the development of succeeding ones. Activities include Use of the…
Botti, Sara; Giuffra, Elisabetta
2010-08-23
DNA barcodes are a global standard for species identification and have countless applications in the medical, forensic and alimentary fields, but few barcoding methods work efficiently in samples in which DNA is degraded, e.g. foods and archival specimens. This limits the choice of target regions harbouring a sufficient number of diagnostic polymorphisms. The method described here uses existing PCR and sequencing methodologies to detect mitochondrial DNA polymorphisms in complex matrices such as foods. The reported application allowed discrimination among 17 fish species of the Scombridae family with high commercial interest, such as mackerels, bonitos and tunas, which are often present in processed seafood. The approach can easily be upgraded with the release of new genetic diversity information to increase the range of detected species. Cocktails of primers are designed for PCR using publicly available sequences of the target region. They are composed of a fixed 5' region and of variable 3' cocktail portions that allow amplification of any member of a group of species of interest. The population of short amplicons is directly sequenced and indexed using primers containing a longer 5' region and the non-polymorphic part of the cocktail portion. A 226 bp region of CytB was selected as the target after collection and screening of 148 online sequences; 85 SNPs were found, of which 75 were present in at least two sequences. Primers were also designed for two shorter sub-fragments that could be amplified from highly degraded samples. The test was used on 103 samples of seafood (canned tuna and scomber, tuna salad, tuna sauce) and could successfully detect the presence of different or additional species that were not identified on the labelling of canned tuna, tuna salad and sauce samples. The described method is largely independent of the degree of degradation of the DNA source and can thus be applied to processed seafood. Moreover, the method is highly flexible: publicly available sequence information on mitochondrial genomes is rapidly increasing for most species, facilitating the choice of target sequences and the improvement of the resolution of the test. This is particularly important for discrimination of marine and aquaculture species for which genome information is still limited.
Programming mRNA decay to modulate synthetic circuit resource allocation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venturelli, Ophelia S.; Tei, Mika; Bauer, Stefan
2017-04-26
Synthetic circuits embedded in host cells compete with cellular processes for limited intracellular resources. Here we show how funnelling of cellular resources, after global transcriptome degradation by the sequence-dependent endoribonuclease MazF, to a synthetic circuit can increase production. Target genes are protected from MazF activity by recoding the gene sequence to eliminate recognition sites, while preserving the amino acid sequence. The expression of a protected fluorescent reporter and flux of a high-value metabolite are significantly enhanced using this genome-scale control strategy. Proteomics measurements discover a host factor in need of protection to improve resource redistribution activity. A computational model demonstrates that the MazF mRNA-decay feedback loop enables proportional control of MazF in an optimal operating regime. Transcriptional profiling of MazF-induced cells elucidates the dynamic shifts in transcript abundance and discovers regulatory design elements. Altogether, our results suggest that manipulation of cellular resource allocation is a key control parameter for synthetic circuit design.
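The protection strategy, synonymous recoding so that a target transcript no longer contains the endoribonuclease recognition site, can be sketched as below. The fragmentary codon table and the assumption that the recognition motif is the trinucleotide ACA (as reported for E. coli MazF) are simplifications made for illustration; a real recoder would cover all codons and also respect codon-usage and other sequence constraints.

```python
# Minimal, illustrative synonymous-codon table (subset; a real recoder needs all 64 codons).
SYNONYMS = {
    "ACA": ["ACC", "ACG", "ACT"],   # Thr
    "CAC": ["CAT"],                 # His
    "AAC": ["AAT"],                 # Asn
    "ACG": ["ACC", "ACT", "ACA"],   # Thr
}

MOTIF = "ACA"   # assumed MazF recognition site

def motif_count(seq):
    return sum(seq[i:i + 3] == MOTIF for i in range(len(seq) - 2))

def recode(cds):
    """Greedily swap codons for synonyms whenever that removes MOTIF occurrences."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    for i, codon in enumerate(codons):
        for alt in SYNONYMS.get(codon, []):
            trial = codons[:i] + [alt] + codons[i + 1:]
            if motif_count("".join(trial)) < motif_count("".join(codons)):
                codons = trial
                break
    return "".join(codons)

cds = "ACACACAACACG"                 # hypothetical coding sequence (Thr-His-Asn-Thr)
protected = recode(cds)
print(motif_count(cds), "->", motif_count(protected), protected)
```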
Del Medico, Luca; Christen, Heinz; Christen, Beat
2017-01-01
Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner. PMID:28531174
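A very reduced sketch of the partitioning idea, cutting a long design into synthesizable blocks that share constant overlaps for downstream assembly, is shown below. The 1 kb block size and 30 bp overlap are placeholder values chosen for illustration; the actual Genome Partitioner additionally screens junctions and synthesis constraints when choosing cut points.

```python
def partition(sequence, block_size=1000, overlap=30):
    """Split a DNA design into overlapping blocks for synthesis and assembly."""
    blocks, start = [], 0
    while start < len(sequence):
        end = min(start + block_size, len(sequence))
        blocks.append((start, end, sequence[start:end]))
        if end == len(sequence):
            break
        start = end - overlap            # neighbouring blocks share `overlap` bp
    return blocks

design = "ACGT" * 5000                   # hypothetical 20 kb segment
blocks = partition(design)
print(len(blocks), "blocks; first block spans", blocks[0][0], "-", blocks[0][1])
```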
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities
2011-01-01
Background Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. Results The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. Conclusions PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/. PMID:21385349
Process Research ON Semix Silicon Materials (PROSSM)
NASA Astrophysics Data System (ADS)
Wohlgemuth, J. H.; Warfield, D. B.
1982-02-01
A cost effective process sequence was identified, equipment was designed to implement a 6.6 MW per year automated production line, and a cost analysis projected a $0.56 per watt cell add-on cost for this line. Four process steps were developed for this program: glass beads back clean-up, hot spray antireflective coating, wave soldering of fronts, and ion milling for edging. While spray dopants were advertised as an off the shelf developed product, they were unreliable with shorter than advertised shelf life.
Memory and learning with rapid audiovisual sequences.
Keller, Arielle S; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.
Singh, Swati; Gupta, Sanchita; Mani, Ashutosh; Chaturvedi, Anoop
2012-01-01
Humulus lupulus, commonly known as hops, is a member of the family Moraceae. Currently many projects are underway, leading to the accumulation of voluminous genomic and expressed sequence tag (EST) sequences in public databases. The genetically characterized domains in these databases are limited due to the non-availability of reliable molecular markers. A large amount of EST sequence data is available for hops. In the present study, simple sequence repeat (SSR) markers extracted from EST data are used as molecular markers for genetic characterization. 25,495 EST sequences were examined and assembled to obtain full-length sequences. The maximum frequency distribution was shown by mononucleotide SSR motifs, i.e. 60.44% in contigs and 62.16% in singletons, whereas the minimum frequencies were observed for hexanucleotide SSRs in contigs (0.09%) and pentanucleotide SSRs in singletons (0.12%). The most frequent trinucleotide motif codes for glutamic acid (GAA), while AT/TA was the most frequent dinucleotide SSR repeat. Flanking primer pairs were designed in silico for the SSR-containing sequences. Functional categorization of the SSR-containing sequences was done through gene ontology terms: biological process, cellular component and molecular function. PMID:22368382
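The SSR mining step can be illustrated with a regular-expression scan like the one below. It is a generic sketch with hypothetical minimum-repeat thresholds (10 for mononucleotides, 6 for dinucleotides, 5 for tri- to hexanucleotides), not the pipeline actually used on the hop EST assembly.

```python
import re

# Minimum repeat counts per motif length (hypothetical, commonly used thresholds).
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq):
    """Yield (start, motif, repeat_count) for simple sequence repeats in `seq`.

    Note: in this simple scan, a homopolymer run also matches as a higher-order motif.
    """
    seq = seq.upper()
    for k, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_rep - 1))
        for m in pattern.finditer(seq):
            yield m.start(), m.group(1), len(m.group(0)) // k

est = "TTTTTTTTTTTTATATATATATATGGCGAAGAAGAAGAAGAAACGT"   # hypothetical EST fragment
for start, motif, reps in find_ssrs(est):
    print(f"pos {start}: ({motif}){reps}")
```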
Technical Considerations for Reduced Representation Bisulfite Sequencing with Multiplexed Libraries
Chatterjee, Aniruddha; Rodger, Euan J.; Stockwell, Peter A.; Weeks, Robert J.; Morison, Ian M.
2012-01-01
Reduced representation bisulfite sequencing (RRBS), which couples bisulfite conversion and next generation sequencing, is an innovative method that specifically enriches genomic regions with a high density of potential methylation sites and enables investigation of DNA methylation at single-nucleotide resolution. Recent advances in the Illumina DNA sample preparation protocol and sequencing technology have vastly improved sequencing throughput capacity. Although the new Illumina technology is now widely used, the unique challenges associated with multiplexed RRBS libraries on this platform have not been previously described. We have made modifications to the RRBS library preparation protocol to sequence multiplexed libraries on a single flow cell lane of the Illumina HiSeq 2000. Furthermore, our analysis incorporates a bioinformatics pipeline specifically designed to process bisulfite-converted sequencing reads and evaluate the output and quality of the sequencing data generated from the multiplexed libraries. We obtained an average of 42 million paired-end reads per sample for each flow cell lane, with a high unique mapping efficiency to the reference human genome. Here we provide a roadmap of modifications, strategies, and troubleshooting approaches we implemented to optimize sequencing of multiplexed libraries on an RRBS background. PMID:23193365
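Since RRBS enrichment relies on MspI digestion (C^CGG) followed by size selection, a quick in-silico digest like the sketch below is often used to estimate which fragments a library will cover. The 40-220 bp window is a placeholder for the size-selection range and the input is a toy sequence, not the human reference; this is not part of the authors' pipeline.

```python
import re

def mspi_fragments(seq):
    """Return fragments from an in-silico MspI digest; MspI cuts C^CGG."""
    cut_positions = [m.start() + 1 for m in re.finditer("CCGG", seq)]  # cut after the first C
    edges = [0] + cut_positions + [len(seq)]
    return [seq[a:b] for a, b in zip(edges, edges[1:])]

def rrbs_fragments(seq, lo=40, hi=220):
    """Keep only fragments inside the assumed size-selection window."""
    return [f for f in mspi_fragments(seq) if lo <= len(f) <= hi]

toy_genome = "A" * 100 + "CCGG" + "T" * 60 + "CCGG" + "G" * 300 + "CCGG" + "C" * 50
for frag in rrbs_fragments(toy_genome):
    print(len(frag), frag[:12] + "...")
```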
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
Automated Design of the Europa Orbiter Tour
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Strange, Nathan J.; Longuski, James M.; Bonfiglio, Eugene P.
2000-01-01
In this paper we investigate tours of the Jovian satellites Europa, Ganymede, and Callisto for the Europa Orbiter Mission. The principal goal of the tour design is to lower arrival V(sub infinity) for the final Europa encounter while meeting all of the design constraints. Key constraints arise from considering the total time of the tour and the radiation dosage of a tour. These tours may employ 14 or more encounters with the Jovian satellites, hence there is an enormous number of possible sequences of these satellites to investigate. We develop a graphical method that greatly aids the design process.
Automated Design of the Europa Orbiter Tour
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Strange, Nathan J.; Longuski, James M.; Bonfiglio, Eugene P.; Taylor, Irene (Technical Monitor)
2000-01-01
In this paper we investigate tours of the Jovian satellites Europa, Ganymede, and Callisto for the Europa Orbiter Mission. The principal goal of the tour design is to lower arrival V(sub infinity) for the final Europa encounter while meeting all of the design constraints. Key constraints arise from considering the total time of the tour and the radiation dosage of a tour. These tours may employ 14 or more encounters with the Jovian satellites; hence there is an enormous number of possible sequences of these satellites to investigate. We develop a graphical method that greatly aids the design process.
DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Rogers, James L.
1996-01-01
Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA), and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements, as well as the existing features of the original version of DeMAID, are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.
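The core idea, reordering coupled design processes so that as few information flows as possible point backwards (feedbacks that force iteration), can be sketched with a tiny evolutionary search over task orderings. This is a much-simplified, mutation-only stand-in for the DeMAID/GA genetic algorithm, and the dependency list is hypothetical.

```python
import random

# Hypothetical couplings: (producer, consumer) means `consumer` needs output of `producer`.
DEPENDENCIES = [("aero", "loads"), ("loads", "structure"), ("structure", "aero"),
                ("aero", "performance"), ("structure", "performance")]
TASKS = ["structure", "performance", "aero", "loads"]

def feedback_count(order):
    """Number of couplings whose producer comes after its consumer (feedback loops)."""
    pos = {t: i for i, t in enumerate(order)}
    return sum(pos[src] > pos[dst] for src, dst in DEPENDENCIES)

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(TASKS, len(TASKS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=feedback_count)
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=feedback_count)

best = evolve()
print(best, "feedbacks:", feedback_count(best))
```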
A Proposal for a K-12 Sequence of Environmental Education Competencies.
ERIC Educational Resources Information Center
Culbert, Jack; And Others
Presented is an overview and model of the proposed curriculum development process in environmental education in Connecticut. Concepts and competencies are identified at each grade level and are designed to facilitate the infusion of environmental education activities within the existing curricula using existing learning resources such as…
The UMR Conception Cycle of Vocational School Students in Solving Linear Equation
ERIC Educational Resources Information Center
Li, Shao-Ying; Leon, Shian
2013-01-01
The authors designed instruments from theories and literatures. Data were collected throughout remedial teaching processes and interviewed with vocational school students. By SOLO (structure of the observed learning outcome) taxonomy, the authors made the UMR (unistructural-multistructural-relational sequence) conception cycle of the formative and…
Deciphering mRNA Sequence Determinants of Protein Production Rate
NASA Astrophysics Data System (ADS)
Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen
2018-03-01
One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
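The translation model referred to here is an inhomogeneous TASEP: ribosomes initiate at a rate alpha, hop along codons with site-specific rates, and exclude one another over their footprint. The Monte Carlo sketch below is a simplified, footprint-aware toy version with random-sequential updates and made-up rates; it is not the paper's analytic solution, and its production-rate estimate is in arbitrary units per update step.

```python
import random

def simulate_tasep(rates, alpha, beta, ell=10, steps=200000, seed=1):
    """Toy inhomogeneous TASEP with extended particles of footprint `ell` codons.

    rates[i]: elongation rate out of codon i; alpha: initiation; beta: termination.
    Ribosome positions are leftmost occupied codons, kept in increasing order.
    """
    L = len(rates)
    rng = random.Random(seed)
    ribosomes, completed = [], 0
    for _ in range(steps):
        k = rng.randrange(len(ribosomes) + 1)     # pick initiation or one ribosome
        if k == 0:                                # initiation attempt at codon 0
            if rng.random() < alpha and (not ribosomes or ribosomes[0] >= ell):
                ribosomes.insert(0, 0)
        else:
            i = k - 1
            pos = ribosomes[i]
            if pos == L - 1:                      # termination attempt
                if rng.random() < beta:
                    ribosomes.pop(i)
                    completed += 1
            elif i + 1 == len(ribosomes) or ribosomes[i + 1] - pos > ell:
                if rng.random() < rates[pos]:     # hop forward if footprint ahead is free
                    ribosomes[i] = pos + 1
    return completed / steps

codon_rates = [0.3] * 10 + [1.0] * 90             # slow first 10 codons (footprint region)
print("production rate ~", simulate_tasep(codon_rates, alpha=0.5, beta=1.0))
```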
Incorporating DSA in multipatterning semiconductor manufacturing technologies
NASA Astrophysics Data System (ADS)
Badr, Yasmine; Torres, J. A.; Ma, Yuansheng; Mitra, Joydeep; Gupta, Puneet
2015-03-01
Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers; but this increases the cost of the new processes especially for low volume products in which the mask set is a large percentage of the total cost. For that reason there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV or E-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to come up with decomposition approaches that increase the design flexibility, allowing different size holes or bar structures by independently changing the process for every patterning step. A simple approach to integrate multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence whether it is: grouping-then-decomposition or decomposition-then-grouping; and each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance and we highlight the need for custom DSA-aware MP algorithms.
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and an increase in the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
Frequency tagging to track the neural processing of contrast in fast, continuous sound sequences.
Nozaradan, Sylvie; Mouraux, André; Cousineau, Marion
2017-07-01
The human auditory system presents a remarkable ability to detect rapid changes in fast, continuous acoustic sequences, as best illustrated in speech and music. However, the neural processing of rapid auditory contrast remains largely unclear, probably due to the lack of methods to objectively dissociate the response components specifically related to the contrast from the other components in response to the sequence of fast continuous sounds. To overcome this issue, we tested a novel use of the frequency-tagging approach allowing contrast-specific neural responses to be tracked based on their expected frequencies. The EEG was recorded while participants listened to 40-s sequences of sounds presented at 8Hz. A tone or interaural time contrast was embedded every fifth sound (AAAAB), such that a response observed in the EEG at exactly 8 Hz/5 (1.6 Hz) or harmonics should be the signature of contrast processing by neural populations. Contrast-related responses were successfully identified, even in the case of very fine contrasts. Moreover, analysis of the time course of the responses revealed a stable amplitude over repetitions of the AAAAB patterns in the sequence, except for the response to perceptually salient contrasts that showed a buildup and decay across repetitions of the sounds. Overall, this new combination of frequency-tagging with an oddball design provides a valuable complement to the classic, transient, evoked potentials approach, especially in the context of rapid auditory information. Specifically, we provide objective evidence on the neural processing of contrast embedded in fast, continuous sound sequences. NEW & NOTEWORTHY Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia might be an impaired processing of fast auditory changes, highlighting how the encoding of rapid acoustic information is critical for auditory communication. Here, we present a novel electrophysiological approach to capture in humans neural markers of contrasts in fast continuous tone sequences. Contrast-specific responses were successfully identified, even for very fine contrasts, providing direct insight on the encoding of rapid auditory information. Copyright © 2017 the American Physiological Society.
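The frequency-tagging logic, looking for energy at the contrast rate (8 Hz / 5 = 1.6 Hz) and its harmonics in the EEG spectrum, can be illustrated with a small FFT sketch on synthetic data. The sampling rate, epoch length, and simulated signal below are all made up for illustration and are not the study's recording parameters.

```python
import numpy as np

fs, duration = 250.0, 40.0                       # Hz, seconds (one 40-s sequence)
t = np.arange(0, duration, 1.0 / fs)

# Synthetic "EEG": response at the base stimulation rate (8 Hz) plus a weaker
# contrast-related response at 1.6 Hz, buried in noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 8.0 * t)
       + 0.3 * np.sin(2 * np.pi * 1.6 * t)
       + rng.normal(0, 1.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for target in (1.6, 3.2, 4.8, 8.0):              # contrast harmonics and base rate
    idx = np.argmin(np.abs(freqs - target))
    print(f"{target:4.1f} Hz amplitude: {spectrum[idx]:.3f}")
```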
Microscopic models for bridging electrostatics and currents
NASA Astrophysics Data System (ADS)
Borghi, L.; DeAmbrosis, A.; Mascheretti, P.
2007-03-01
A teaching sequence based on the use of microscopic models to link electrostatic phenomena with direct currents is presented. The sequence, devised for high school students, was designed after initial work carried out with student teachers attending a school of specialization for teaching physics at high school, at the University of Pavia. The results obtained with them are briefly presented, because they directed our steps for the development of the teaching sequence. For both the design of the experiments and their interpretation, we drew inspiration from the original works of Alessandro Volta; in addition, a structural model based on the particular role of electrons as elementary charges both in electrostatic phenomena and in currents was proposed. The teaching sequence starts from experiments on charging objects by rubbing and by induction, and engages students in constructing microscopic models to interpret their observations. By using these models and by closely examining the ideas of tension and capacitance, the students acknowledge that a charging (or discharging) process is due to the motion of electrons that, albeit for short time intervals, represent a current. Finally, they are made to see that the same happens in transients of direct current circuits.
RBT-GA: a novel metaheuristic for solving the Multiple Sequence Alignment problem.
Taheri, Javid; Zomaya, Albert Y
2009-07-07
Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate the underlying main characteristics/functions. This information is also used to generate phylogenetic trees. This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. RBT-GA is tested with one of the well-known benchmark suites (BAliBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences.
Whole exome or genome sequencing: nurses need to prepare families for the possibilities.
Prows, Cynthia A; Tran, Grace; Blosser, Beverly
2014-12-01
A discussion of whole exome sequencing and the type of possible results patients and families should be aware of before samples are obtained. To find the genetic cause of a rare disorder, whole exome sequencing analyses all known and suspected human genes from a single sample. Over 20,000 detected DNA variants in each individual exome must be considered as possibly causing disease or disregarded as not relevant to the person's disease. In the process, unexpected gene variants associated with known diseases unrelated to the primary purpose of the test may be incidentally discovered. Because family members' DNA samples are often needed, gene variants associated with known genetic diseases or predispositions for diseases can also be discovered in their samples. Design: discussion paper. Data sources: PubMed (2009-2013), reference lists of retrieved articles, and Google Scholar. Nurses need a general understanding of the scope of potential genomic information that may be revealed with whole exome sequencing to provide support and guidance to individuals and families during their decision-making process, while waiting for results and after disclosure. Nurse scientists who want to use whole exome sequencing in their study design and methods must decide early in study development whether they will return primary whole exome sequencing research results and whether they will give research participants choices about learning incidental research results. It is critical that nurses translate their knowledge about whole exome sequencing into their patient education and patient advocacy roles and relevant programmes of research. © 2014 John Wiley & Sons Ltd.
De Novo Enzyme Design Using Rosetta3
Richter, Florian; Leaver-Fay, Andrew; Khare, Sagar D.; Bjelic, Sinisa; Baker, David
2011-01-01
The Rosetta de novo enzyme design protocol has been used to design enzyme catalysts for a variety of chemical reactions, and in principle can be applied to any arbitrary chemical reaction of interest. The process has four stages: 1) choice of a catalytic mechanism and corresponding minimal model active site, 2) identification of sites in a set of scaffold proteins where this minimal active site can be realized, 3) optimization of the identities of the surrounding residues for stabilizing interactions with the transition state and primary catalytic residues, and 4) evaluation and ranking of the resulting designed sequences. Stages two through four of this process can be carried out with the Rosetta package, while stage one needs to be done externally. Here, we demonstrate in detail how to carry out the Rosetta enzyme design protocol from start to end, using the triosephosphate isomerase reaction for illustration. PMID:21603656
A process for prototyping onboard payload displays for Space Station Freedom
NASA Technical Reports Server (NTRS)
Moore, Loretta A.
1992-01-01
Significant advances have been made in the area of Human-Computer Interface design. However, there is no well-defined process for going from user interface requirements to user interface design. Developing and designing a clear and consistent user interface for medium to large scale systems is a very challenging and complex task. The task becomes increasingly difficult when there is very little guidance and few procedures on how the development process should flow from one stage to the next. Without a specific sequence of development steps, each design becomes difficult to repeat, to evaluate, to improve, and to articulate to others. This research contributes a process which identifies the phases of development, and the products produced as a result of each phase, for a rapid prototyping process to be used to develop requirements for the onboard payload displays for Space Station Freedom. The functional components of a dynamic prototyping environment in which this process can be carried out are also discussed. Some of the central questions which are answered here include: How does one go from specifications to an actual prototype? How is a prototype evaluated? How is usability defined and thus measured? How do we use the information from evaluation in redesign of an interface? And are there techniques which allow for convergence on a design?
Yousef, Abdulaziz; Moghadam Charkari, Nasrollah
2013-11-07
Protein-protein interactions (PPIs) are among the most important data for understanding cellular processes. Many interesting methods have been proposed in order to predict PPIs; however, methods that rely on the sequence of proteins as their only prior knowledge are more universal. In this paper, a sequence-based, fast, and adaptive PPI prediction method is introduced to assign two proteins to an interaction class (yes, no). First, in order to improve the representation of the sequences, twelve physicochemical properties of amino acids were used, through different representation methods, to transform the sequences of protein pairs into feature vectors. Then, to speed up the learning process and reduce the effect of noisy PPI data, principal component analysis (PCA) is carried out as the feature extraction step. Finally, a new and adaptive Learning Vector Quantization (LVQ) predictor is designed to deal with both balanced and imbalanced datasets. Accuracies of 93.88%, 90.03%, and 89.72% were obtained on the S. cerevisiae, H. pylori, and independent datasets, respectively. The results of various experiments indicate the efficiency and validity of the method. © 2013 Published by Elsevier Ltd.
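A minimal sketch of this kind of pipeline is given below, assuming the physicochemical feature vectors have already been computed (random placeholders stand in for them). It combines scikit-learn PCA with a bare-bones LVQ1 prototype update; the authors' adaptive LVQ variant, feature encodings and benchmark datasets are not reproduced.

```python
# Illustrative pipeline, not the authors' implementation: PCA for feature
# reduction followed by a minimal LVQ1 classifier with one prototype per class.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))              # placeholder physicochemical feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # placeholder interaction labels (0 = no, 1 = yes)

X_red = PCA(n_components=10).fit_transform(X)   # reduce noise and speed up learning

# initialise one prototype per class at the class mean
prototypes = np.array([X_red[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])

alpha = 0.05
for epoch in range(20):
    for xi, yi in zip(X_red, y):
        k = np.argmin(np.linalg.norm(prototypes - xi, axis=1))   # winning prototype
        step = alpha * (xi - prototypes[k])
        prototypes[k] += step if labels[k] == yi else -step       # LVQ1 update rule

pred = labels[np.argmin(np.linalg.norm(X_red[:, None] - prototypes, axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```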
Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag
2015-01-01
Microbial enzyme diversity is a key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729
Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.
Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw
2018-07-01
Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software package for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta Design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods, with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences, assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.
DyNAMiC Workbench: an integrated development environment for dynamic DNA nanotechnology
Grun, Casey; Werfel, Justin; Zhang, David Yu; Yin, Peng
2015-01-01
Dynamic DNA nanotechnology provides a promising avenue for implementing sophisticated assembly processes, mechanical behaviours, sensing and computation at the nanoscale. However, design of these systems is complex and error-prone, because the need to control the kinetic pathway of a system greatly increases the number of design constraints and possible failure modes for the system. Previous tools have automated some parts of the design workflow, but an integrated solution is lacking. Here, we present software implementing a three ‘tier’ design process: a high-level visual programming language is used to describe systems, a molecular compiler builds a DNA implementation and nucleotide sequences are generated and optimized. Additionally, our software includes tools for analysing and ‘debugging’ the designs in silico, and for importing/exporting designs to other commonly used software systems. The software we present is built on many existing pieces of software, but is integrated into a single package—accessible using a Web-based interface at http://molecular-systems.net/workbench. We hope that the deep integration between tools and the flexibility of this design process will lead to better experimental results, fewer experimental design iterations and the development of more complex DNA nanosystems. PMID:26423437
Levels of Integration in Cognitive Control and Sequence Processing in the Prefrontal Cortex
Bahlmann, Jörg; Korb, Franziska M.; Gratton, Caterina; Friederici, Angela D.
2012-01-01
Cognitive control is necessary to flexibly act in changing environments. Sequence processing is needed in language comprehension to build the syntactic structure in sentences. Functional imaging studies suggest that sequence processing engages the left ventrolateral prefrontal cortex (PFC). In contrast, cognitive control processes additionally recruit bilateral rostral lateral PFC regions. The present study aimed to investigate these two types of processes in one experimental paradigm. Sequence processing was manipulated using two different sequencing rules varying in complexity. Cognitive control was varied with different cue-sets that determined the choice of a sequencing rule. Univariate analyses revealed distinct PFC regions for the two types of processing (i.e. sequence processing: left ventrolateral PFC and cognitive control processing: bilateral dorsolateral and rostral PFC). Moreover, in a common brain network (including left lateral PFC and intraparietal sulcus) no interaction between sequence and cognitive control processing was observed. In contrast, a multivariate pattern analysis revealed an interaction of sequence and cognitive control processing, such that voxels in left lateral PFC and parietal cortex showed different tuning functions for tasks involving different sequencing and cognitive control demands. These results suggest that the difference between the process of rule selection (i.e. cognitive control) and the process of rule-based sequencing (i.e. sequence processing) find their neuronal underpinnings in distinct activation patterns in lateral PFC. Moreover, the combination of rule selection and rule sequencing can shape the response of neurons in lateral PFC and parietal cortex. PMID:22952762
Scaling up the 454 Titanium Library Construction and Pooling of Barcoded Libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phung, Wilson; Hack, Christopher; Shapiro, Harris
2009-03-23
We have been developing a high-throughput 454 library construction process at the Joint Genome Institute to meet the needs of de novo sequencing a large number of microbial and eukaryote genomes, EST, and metagenome projects. We have been focusing efforts in three areas: (1) modifying the current process to allow the construction of 454 standard libraries in a 96-well format; (2) developing a robotic platform to perform the 454 library construction; and (3) designing molecular barcodes to allow pooling and sorting of many different samples. In the development of a high-throughput process to scale up the number of libraries by adapting the process to a 96-well plate format, the key process change involves the replacement of gel electrophoresis for size selection with Solid Phase Reversible Immobilization (SPRI) beads. Although the standard deviation of the insert sizes increases, the overall sequence quality and distribution of the reads in the genome have not changed. The manual process of constructing 454 shotgun libraries on 96-well plates is a time-consuming, labor-intensive, and ergonomically hazardous process; we have been experimenting with programming a BioMek robot to perform the library construction. This will not only enable library construction to be completed in a single day, but will also minimize any ergonomic risk. In addition, we have implemented a set of molecular barcodes (AKA Multiple Identifiers or MIDs) and a pooling process that allows us to sequence many targets simultaneously. Here we present the testing of pooling a set of selected fosmids derived from the endomycorrhizal fungus Glomus intraradices. By combining the robotic library construction process and the use of molecular barcodes, it is now possible to sequence hundreds of fosmids that represent a minimal tiling path of this genome. Here we present the progress and the challenges of developing these scaled-up processes.
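To illustrate how pooled reads can later be sorted by their molecular barcode, the sketch below assigns each read to a sample by matching its leading bases against a small MID table, allowing one mismatch, and then strips the barcode. The barcode sequences, sample names and mismatch tolerance are hypothetical placeholders rather than the actual JGI MID set.

```python
# Illustrative demultiplexing of pooled reads by molecular barcode (MID);
# the barcodes, sample names and reads here are hypothetical placeholders.
MIDS = {"ACGAGTGCGT": "sample_A", "ACGCTCGACA": "sample_B"}   # MID -> sample name

def assign(read, mids=MIDS, max_mismatch=1):
    for mid, sample in mids.items():
        prefix = read[:len(mid)]
        mismatches = sum(a != b for a, b in zip(prefix, mid))
        if mismatches <= max_mismatch:
            return sample, read[len(mid):]     # strip the barcode before assembly
    return None, read                          # unassigned read

reads = ["ACGAGTGCGTTTGACCTAGGA", "ACGCTCGACAGGATTACAGGT"]
for r in reads:
    print(assign(r))
```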
Ebbie: automated analysis and storage of small RNA cloning data using a dynamic web server
Ebhardt, H Alexander; Wiese, Kay C; Unrau, Peter J
2006-01-01
Background: DNA sequencing is used ubiquitously, from deciphering genomes [1] to determining the primary sequence of small RNAs (smRNAs) [2-5]. The cloning of smRNAs is currently the most conventional method to determine the actual sequence of these important regulators of gene expression. Typical smRNA cloning projects involve the sequencing of hundreds to thousands of smRNA clones that are delimited at their 5' and 3' ends by fixed sequence regions. These primers result from the biochemical protocol used to isolate and convert the smRNA into clonable PCR products. Recently we completed an smRNA cloning project involving tobacco plants, where analysis was required for ~700 smRNA sequences [6]. Finding no easily accessible research tool to enter and analyze smRNA sequences, we developed Ebbie to assist us with our study. Results: Ebbie is a semi-automated smRNA cloning data processing algorithm, which initially searches for any substring within a DNA sequencing text file that is flanked by two constant strings. The substring, also termed smRNA or insert, is stored in a MySQL and BlastN database. These inserts are then compared using BlastN to locally installed databases, allowing the rapid comparison of the insert to both the growing smRNA database and to other static sequence databases. Our laboratory used Ebbie to analyze scores of DNA sequencing data originating from an smRNA cloning project [6]. Through its built-in instant analysis of all inserts using BlastN, we were able to quickly identify 33 groups of smRNAs from ~700 database entries. This clustering allowed the easy identification of novel and highly expressed clusters of smRNAs. Ebbie is available under the GNU GPL and is currently implemented on a dynamic web server. Conclusion: Ebbie was designed for medium-sized smRNA cloning projects with about 1,000 database entries [6-8]. Ebbie can be used for any type of sequence analysis where two constant primer regions flank a sequence of interest. The reliable storage of inserts and their annotation in a MySQL database, together with BlastN [9] comparison of new inserts to dynamic and static databases, make it a powerful new tool in any laboratory using DNA sequencing. Ebbie also prevents manual mistakes during the excision process and speeds up annotation and data entry. Once the server is installed locally, its access can be restricted to protect sensitive new DNA sequencing data. Ebbie was primarily designed for smRNA cloning projects, but can be applied to a variety of RNA and DNA cloning projects [2,3,10,11]. PMID:16584563
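The core extraction step that Ebbie automates, finding the insert flanked by two constant primer strings, can be sketched with a regular expression as below. The primer sequences and the example read are hypothetical; Ebbie's MySQL storage and BlastN annotation are not shown.

```python
# Minimal sketch of the insert-extraction step: find the substring flanked by two
# constant primer strings in a sequencing read. Primer sequences are hypothetical.
import re

FIVE_PRIME = "GTCAGATCGG"     # placeholder 5' constant region
THREE_PRIME = "CCTTGGCACC"    # placeholder 3' constant region
pattern = re.compile(re.escape(FIVE_PRIME) + "(.+?)" + re.escape(THREE_PRIME))

def extract_insert(read):
    m = pattern.search(read)
    return m.group(1) if m else None

print(extract_insert("NNNGTCAGATCGGTTGACGATACCTTGGCACCNNN"))   # -> TTGACGATA
```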
Observation sequences and onboard data processing of Planet-C
NASA Astrophysics Data System (ADS)
Suzuki, M.; Imamura, T.; Nakamura, M.; Ishi, N.; Ueno, M.; Hihara, H.; Abe, T.; Yamada, T.
Planet-C, or VCO (Venus Climate Orbiter), will carry 5 cameras operating in the UV-IR region, IR1 (IR 1-micrometer camera), IR2 (IR 2-micrometer camera), UVI (UV Imager), LIR (Long-IR camera), and LAC (Lightning and Airglow Camera), to investigate the atmospheric dynamics of Venus. During the 30-hr orbit, designed to quasi-synchronize with the super-rotation of the Venus atmosphere, 3 groups of scientific observations will be carried out: (i) image acquisition by the 4 cameras (IR1, IR2, UVI, LIR) for 20 min in every 2 hrs; (ii) LAC operation, only when VCO is within the Venus shadow; and (iii) radio occultation. These observation sequences will define the scientific outputs of the VCO program, but the sequences must be reconciled with command/telemetry downlink and thermal/power conditions. To maximize the science data downlink, the data must be well compressed, and the compression efficiency and image quality are therefore of significant scientific importance to the VCO program. Images of the 4 cameras (IR1, IR2 and UVI: 1K x 1K; LIR: 240 x 240) will be compressed using the JPEG2000 (J2K) standard. J2K was selected because of (a) no block noise, (b) efficiency, (c) both reversible and irreversible modes, (d) patent/royalty-free status, and (e) existing implementations as academic and commercial software, ICs, and ASIC logic designs. Data compression efficiencies of J2K are about 0.3 (reversible) and 0.1 to 0.01 (irreversible). The DE (Digital Electronics) unit, which controls the 4 cameras and handles onboard data processing and compression, is at the concept design stage. It is concluded that the J2K data compression logic circuits using space
Geostationary Lightning Mapper: Lessons Learned from Post Launch Test
NASA Astrophysics Data System (ADS)
Edgington, S.; Tillier, C. E.; Demroff, H.; VanBezooijen, R.; Christian, H. J., Jr.; Bitzer, P. M.
2017-12-01
Pre-launch calibration and algorithm design for the GOES Geostationary Lightning Mapper resulted in a successful and trouble-free on-orbit activation and post-launch test sequence. Within minutes of opening the GLM aperture door on January 4th, 2017, lightning was detected across the entire field of view. During the six-month post-launch test period, numerous processing parameters on board the instrument and in the ground processing algorithms were fine-tuned. Demonstrated on-orbit performance exceeded pre-launch predictions. We provide an overview of the ground calibration sequence, on-orbit tuning of the instrument, and tuning of the ground processing algorithms (event filtering and navigation). We also touch on new insights obtained from analysis of a large and growing archive of raw GLM data, containing some 3 x 10^8 flash detections derived from over 10^10 full-disk images of the Earth.
TaxI: a software tool for DNA barcoding using distance methods
Steinke, Dirk; Vences, Miguel; Salzburger, Walter; Meyer, Axel
2005-01-01
DNA barcoding is a promising approach to the diagnosis of biological diversity in which DNA sequences serve as the primary key for information retrieval. Most existing software for evolutionary analysis of DNA sequences was designed for phylogenetic analyses and, hence, those algorithms do not offer appropriate solutions for the rapid but precise analyses needed for DNA barcoding, and are also unable to process the often large comparative datasets. We developed a flexible software tool for DNA taxonomy, named TaxI. This program calculates sequence divergences between a query sequence (taxon to be barcoded) and each sequence of a dataset of reference sequences defined by the user. Because the analysis is based on separate pairwise alignments, this software is also able to work with sequences characterized by multiple insertions and deletions that are difficult to align in large sequence sets (i.e. thousands of sequences) by multiple alignment algorithms because of computational restrictions. Here, we demonstrate the utility of this approach with two datasets of fish larvae and juveniles from Lake Constance and juvenile land snails, under different models of sequence evolution. Sets of ribosomal 16S rRNA sequences, characterized by multiple indels, performed as well as or better than cox1 sequence sets in assigning sequences to species, demonstrating the suitability of rRNA genes for DNA barcoding. PMID:16214755
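The distance-based assignment principle can be illustrated as below with an uncorrected p-distance between a query and each reference, reporting the closest reference. For brevity the pairwise-alignment step that TaxI performs is omitted and the toy sequences are assumed already aligned; the reference names and sequences are hypothetical.

```python
# Sketch of distance-based assignment in the spirit of TaxI (not its actual code):
# compute an uncorrected p-distance between the query and each reference sequence
# and report the closest reference.
def p_distance(a, b):
    sites = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in sites) / len(sites)

references = {"Species_1": "ACGTACGTTGCA", "Species_2": "ACGTTCGTAGCA"}   # hypothetical
query = "ACGTACGTAGCA"

distances = {name: p_distance(query, seq) for name, seq in references.items()}
best = min(distances, key=distances.get)
print(distances, "-> closest reference:", best)
```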
Scar-less multi-part DNA assembly design automation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillson, Nathan J.
The present invention provides a method of designing an implementation of a DNA assembly. In an exemplary embodiment, the method includes (1) receiving a list of DNA sequence fragments to be assembled together and an order in which to assemble the DNA sequence fragments, (2) designing DNA oligonucleotides (oligos) for each of the DNA sequence fragments, and (3) creating a plan for adding flanking homology sequences to each of the DNA oligos. In another exemplary embodiment, the method includes (1) receiving a list of DNA sequence fragments to be assembled together and an order in which to assemble the DNA sequence fragments, (2) designing DNA oligonucleotides (oligos) for each of the DNA sequence fragments, and (3) creating a plan for adding optimized overhang sequences to each of the DNA oligos.
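A rough sketch of step (3) of the first embodiment is given below: the 5' flank added to a fragment's forward oligo is taken from the tail of the preceding fragment, and the 3' flank from the head of the following fragment, in the given assembly order. The fragment sequences, the 20-bp homology length and the 40-bp priming regions are illustrative assumptions, not values from the invention.

```python
# Illustrative sketch of adding flanking homology to each fragment's oligos;
# all sequences and length choices below are hypothetical.
HOMOLOGY = 20

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def design_oligos(fragments):
    oligos = []
    for i, frag in enumerate(fragments):
        prev_tail = fragments[i - 1][-HOMOLOGY:] if i > 0 else ""
        next_head = fragments[i + 1][:HOMOLOGY] if i + 1 < len(fragments) else ""
        forward = prev_tail + frag[:40]              # forward oligo with 5' homology flank
        reverse = revcomp(frag[-40:] + next_head)    # reverse oligo with 3' homology flank
        oligos.append((forward, reverse))
    return oligos

fragments = ["ATG" + "ACGT" * 30, "GGC" + "TTGA" * 30, "CCA" + "GATC" * 30]  # toy parts
for fwd, rev in design_oligos(fragments):
    print(len(fwd), len(rev), fwd[:25] + "...", rev[:25] + "...")
```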
Sachsenröder, Jana; Twardziok, Sven; Hammerl, Jens A; Janczyk, Pawel; Wrede, Paul; Hertwig, Stefan; Johne, Reimar
2012-01-01
Animal faeces comprise a community of many different microorganisms including bacteria and viruses. Only scarce information is available about the diversity of viruses present in the faeces of pigs. Here we describe a protocol, which was optimized for the purification of the total fraction of viral particles from pig faeces. The genomes of the purified DNA and RNA viruses were simultaneously amplified by PCR and subjected to deep sequencing followed by bioinformatic analyses. The efficiency of the method was monitored using a process control consisting of three bacteriophages (T4, M13 and MS2) with different morphology and genome types. Defined amounts of the bacteriophages were added to the sample and their abundance was assessed by quantitative PCR during the preparation procedure. The procedure was applied to a pooled faecal sample of five pigs. From this sample, 69,613 sequence reads were generated. All of the added bacteriophages were identified by sequence analysis of the reads. In total, 7.7% of the reads showed significant sequence identities with published viral sequences. They mainly originated from bacteriophages (73.9%) and mammalian viruses (23.9%); 0.8% of the sequences showed identities to plant viruses. The most abundant detected porcine viruses were kobuvirus, rotavirus C, astrovirus, enterovirus B, sapovirus and picobirnavirus. In addition, sequences with identities to the chimpanzee stool-associated circular ssDNA virus were identified. Whole genome analysis indicates that this virus, tentatively designated as pig stool-associated circular ssDNA virus (PigSCV), represents a novel pig virus. The established protocol enables the simultaneous detection of DNA and RNA viruses in pig faeces including the identification of so far unknown viruses. It may be applied in studies investigating aetiology, epidemiology and ecology of diseases. The implemented process control serves as quality control, ensures comparability of the method and may be used for further method optimization.
The Design Manager's Aid for Intelligent Decomposition (DeMAID)
NASA Technical Reports Server (NTRS)
Rogers, James L.
1994-01-01
Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.
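As a toy illustration of the kind of module ordering DeMAID automates, the sketch below derives an execution sequence from a declared coupling list using a plain topological sort. The module names and dependencies are hypothetical, and the real knowledge-based system additionally handles feedback loops and multilevel decomposition.

```python
# Toy module ordering from a coupling (dependency) list; not DeMAID itself.
from graphlib import TopologicalSorter

# module -> set of modules whose outputs it needs (hypothetical couplings)
couplings = {
    "aero_loads":  set(),
    "structures":  {"aero_loads"},
    "thermal":     {"structures"},
    "controls":    {"structures", "thermal"},
}

order = list(TopologicalSorter(couplings).static_order())
print("suggested execution sequence:", order)
```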
Davey, James A; Chica, Roberto A
2015-04-01
Computational protein design (CPD) predictions are highly dependent on the structure of the input template used. However, it is unclear how small differences in template geometry translate to large differences in stability prediction accuracy. Herein, we explored how structural changes to the input template affect the outcome of stability predictions by CPD. To do this, we prepared alternate templates by Rotamer Optimization followed by energy Minimization (ROM) and used them to recapitulate the stability of 84 protein G domain β1 mutant sequences. In the ROM process, side-chain rotamers for wild-type (WT) or mutant sequences are optimized on crystal or nuclear magnetic resonance (NMR) structures prior to template minimization, resulting in alternate structures termed ROM templates. We show that use of ROM templates prepared from sequences known to be stable results predominantly in improved prediction accuracy compared to using the minimized crystal or NMR structures. Conversely, ROM templates prepared from sequences that are less stable than the WT reduce prediction accuracy by increasing the number of false positives. These observed changes in prediction outcomes are attributed to differences in side-chain contacts made by rotamers in ROM templates. Finally, we show that ROM templates prepared from sequences that are unfolded or that adopt a nonnative fold result in the selective enrichment of sequences that are also unfolded or that adopt a nonnative fold, respectively. Our results demonstrate the existence of a rotamer bias caused by the input template that can be harnessed to skew predictions toward sequences displaying desired characteristics. © 2014 The Protein Society.
Lakshmanan, Anupama; Cheong, Daniel W; Accardo, Angelo; Di Fabrizio, Enzo; Riekel, Christian; Hauser, Charlotte A E
2013-01-08
The self-assembly of abnormally folded proteins into amyloid fibrils is a hallmark of many debilitating diseases, from Alzheimer's and Parkinson diseases to prion-related disorders and diabetes type II. However, the fundamental mechanism of amyloid aggregation remains poorly understood. Core sequences of four to seven amino acids within natural amyloid proteins that form toxic fibrils have been used to study amyloidogenesis. We recently reported a class of systematically designed ultrasmall peptides that self-assemble in water into cross-β-type fibers. Here we compare the self-assembly of these peptides with natural core sequences. These include core segments from Alzheimer's amyloid-β, human amylin, and calcitonin. We analyzed the self-assembly process using circular dichroism, electron microscopy, X-ray diffraction, rheology, and molecular dynamics simulations. We found that the designed aliphatic peptides exhibited a similar self-assembly mechanism to several natural sequences, with formation of α-helical intermediates being a common feature. Interestingly, the self-assembly of a second core sequence from amyloid-β, containing the diphenylalanine motif, was distinctly different from all other examined sequences. The diphenylalanine-containing sequence formed β-sheet aggregates without going through the α-helical intermediate step, giving a unique fiber-diffraction pattern and simulation structure. Based on these results, we propose a simplified aliphatic model system to study amyloidosis. Our results provide vital insight into the nature of early intermediates formed and suggest that aromatic interactions are not as important in amyloid formation as previously postulated. This information is necessary for developing therapeutic drugs that inhibit and control amyloid formation.
ULSGEN (Uplink Summary Generator)
NASA Technical Reports Server (NTRS)
Wang, Y.-F.; Schrock, M.; Reeve, T.; Nguyen, K.; Smith, B.
2014-01-01
Uplink is an important part of spacecraft operations. Ensuring the accuracy of uplink content is essential to mission success. Before commands are radiated to the spacecraft, the command and sequence must be reviewed and verified by various teams. In most cases, this process requires collecting the command data, reviewing the data during a command conference meeting, and providing physical signatures by designated members of various teams to signify approval of the data. If commands or sequences are disapproved for some reason, the whole process must be restarted. Recording data and decision history is important for traceability reasons. Given that many steps and people are involved in this process, an easily accessible software tool for managing the process is vital to reducing human error, which could result in uplinking incorrect data to the spacecraft. An uplink summary generator called ULSGEN was developed to assist this uplink content approval process. ULSGEN generates a web-based summary of uplink file content and provides an online review process. Spacecraft operations personnel view this summary as a final check before actual radiation of the uplink data.
Toward a Symphony of Reactivity: Cascades Involving Catalysis and Sigmatropic Rearrangements
Jones, Amanda C.; May, Jeremy A.; Sarpong, Richmond; Stoltz, Brian M.
2014-01-01
Catalysis and synthesis are intimately linked in modern organic chemistry. The synthesis of complex molecules is an ever-evolving area of science. In many regards, the inherent beauty associated with a synthetic sequence can be linked to a certain combination of the creativity with which a sequence is designed and the overall efficiency with which the ultimate process is performed. In synthesis, as in other endeavors, beauty is very much in the eye of the beholder. It is with this in mind that we will attempt to review an area of synthesis that has fascinated us and that we find extraordinarily beautiful, namely the combination of catalysis and sigmatropic rearrangements in consecutive and cascade sequences. PMID:24677683
Evaluating and Redesigning Teaching Learning Sequences at the Introductory Physics Level
ERIC Educational Resources Information Center
Guisasola, Jenaro; Zuza, Kristina; Ametller, Jaume; Gutierrez-Berraondo, José
2017-01-01
In this paper we put forward a proposal for the design and evaluation of teaching and learning sequences in upper secondary school and university. We will connect our proposal with relevant contributions on the design of teaching sequences, ground it on the design-based research methodology, and discuss how teaching and learning sequences designed…
Unified Deep Learning Architecture for Modeling Biology Sequence.
Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang
2017-10-09
Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences with traditional sequence models, characteristics such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory (LSTM) or gated recurrent units (GRU) to capture long-range interactions. An optional reshape operator was designed to adapt to the diversity of the output labels, and a training algorithm was implemented to support the training of sequence models capable of processing variable-length sequences. Additionally, merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems within a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known; our results indicate that the model obtains predictions of protein residue interactions that exceed the accuracy of currently popular approaches by 10% on multiple benchmarks.
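A minimal PyTorch sketch of the general idea, a bidirectional LSTM over padded variable-length sequences with a per-position output head, is shown below. The vocabulary size, layer sizes and the toy batch are illustrative assumptions and do not reproduce the authors' architecture, reshape operator or training algorithm.

```python
# Minimal PyTorch sketch (not the authors' model): a bidirectional LSTM over
# padded, variable-length sequences with a per-position output head.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=25, emb=32, hidden=64, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)     # per-position label scores

    def forward(self, tokens, lengths):
        x = self.embed(tokens)
        packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
        out, _ = self.rnn(packed)
        out, _ = pad_packed_sequence(out, batch_first=True)
        return self.head(out)                          # (batch, max_len, classes)

# two toy sequences of different lengths, padded with 0
tokens = torch.tensor([[3, 7, 2, 9, 4], [5, 1, 8, 0, 0]])
lengths = torch.tensor([5, 3])
logits = BiLSTMTagger()(tokens, lengths)
print(logits.shape)    # torch.Size([2, 5, 3])
```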
Bazhan, S I; Karpenko, L I; Ilyicheva, T N; Belavin, P A; Seregin, S V; Danilyuk, N K; Antonets, D V; Ilyichev, A A
2010-04-01
Advances in defining HIV-1 CD8+ T cell epitopes and understanding endogenous MHC class I antigen processing enable the rational design of polyepitope vaccines for eliciting broadly targeted CD8+ T cell responses to HIV-1. Here we describe the construction and comparison of experimental DNA vaccines consisting of ten selected HLA-A2 epitopes from the major HIV-1 antigens Env, Gag, Pol, Nef, and Vpr. The immunogenicity of designed gene constructs was assessed after double DNA prime, single vaccinia virus boost immunization of HLA-A2 transgenic mice. We compared a number of parameters including different strategies for fusing ubiquitin to the polyepitope and including spacer sequences between epitopes to optimize proteasome liberation and TAP transport. It was demonstrated that the vaccine construct that induced in vitro the largest number of [peptide-MHC class I] complexes was also the most immunogenic in the animal experiments. This most immunogenic vaccine construct contained the N-terminal ubiquitin for targeting the polyepitope to the proteasome and included both proteasome liberation and TAP transport optimized spacer sequences that flanked the epitopes within the polyepitope construct. The immunogenicity of determinants was strictly related to their affinities for HLA-A2. Our finding supports the concept of rational vaccine design based on detailed knowledge of antigen processing. Copyright 2010 Elsevier Ltd. All rights reserved.
Sequencing batch-reactor control using Gaussian-process models.
Kocijan, Juš; Hvala, Nadja
2013-06-01
This paper presents a Gaussian-process (GP) model for the design of sequencing batch-reactor (SBR) control for wastewater treatment. The GP model is a probabilistic, nonparametric model with uncertainty predictions. In the case of SBR control, it is used for the on-line optimisation of the batch-phase durations. The control algorithm follows the course of the indirect process variables (pH, redox potential and dissolved oxygen concentration) and recognises the characteristic patterns in their time profiles. The control algorithm uses GP-based regression to smooth the signals and GP-based classification for the pattern recognition. When tested on signals from an SBR laboratory pilot plant, the control algorithm provided a satisfactory agreement between the proposed completion times and the actual termination times of the biodegradation processes. In a set of tested batches, the final ammonia and nitrate concentrations were below 1 and 0.5 mg L^-1, respectively, while the aeration time was shortened considerably. Copyright © 2013 Elsevier Ltd. All rights reserved.
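The GP-regression smoothing step can be illustrated with scikit-learn as below, using a synthetic signal in place of a real pH or redox profile. The kernel choice and the simple slope-based feature are assumptions for illustration; the paper's GP classification of patterns and the actual phase-termination logic are not shown.

```python
# Illustrative Gaussian-process smoothing of a noisy batch signal with scikit-learn
# (synthetic data; the GP classification and control logic are not reproduced here).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 4, 80)[:, None]                      # batch time, hours
signal = np.tanh(t.ravel() - 2) + 0.05 * np.random.default_rng(1).normal(size=80)

kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, signal)

mean, std = gp.predict(t, return_std=True)              # smoothed profile with uncertainty
slope = np.gradient(mean, t.ravel())                    # look for a characteristic bend
print("steepest change at t =", t.ravel()[np.argmax(np.abs(slope))], "h")
```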
An, Z; Tang, Z; Ma, B; Mason, A S; Guo, Y; Yin, J; Gao, C; Wei, L; Li, J; Fu, D
2014-07-01
Although many studies have shown that transposable element (TE) activation is induced by hybridisation and polyploidisation in plants, much less is known on how different types of TE respond to hybridisation, and the impact of TE-associated sequences on gene function. We investigated the frequency and regularity of putative transposon activation for different types of TE, and determined the impact of TE-associated sequence variation on the genome during allopolyploidisation. We designed different types of TE primers and adopted the Inter-Retrotransposon Amplified Polymorphism (IRAP) method to detect variation in TE-associated sequences during the process of allopolyploidisation between Brassica rapa (AA) and Brassica oleracea (CC), and in successive generations of self-pollinated progeny. In addition, fragments with TE insertions were used to perform Blast2GO analysis to characterise the putative functions of the fragments with TE insertions. Ninety-two primers amplifying 548 loci were used to detect variation in sequences associated with four different orders of TE sequences. TEs could be classed in ascending frequency into LTR-REs, TIRs, LINEs, SINEs and unknown TEs. The frequency of novel variation (putative activation) detected for the four orders of TEs was highest from the F1 to F2 generations, and lowest from the F2 to F3 generations. Functional annotation of sequences with TE insertions showed that genes with TE insertions were mainly involved in metabolic processes and binding, and preferentially functioned in organelles. TE variation in our study severely disturbed the genetic compositions of the different generations, resulting in inconsistencies in genetic clustering. Different types of TE showed different patterns of variation during the process of allopolyploidisation. © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.
Breaking the computational barriers of pairwise genome comparison.
Torreno, Oscar; Trelles, Oswaldo
2015-08-11
Conventional pairwise sequence comparison software algorithms are being used to process much larger datasets than they were originally designed for. This can result in processing bottlenecks that limit software capabilities or prevent full use of the available hardware resources. Overcoming the barriers that limit the efficient computational analysis of large biological sequence datasets by retrofitting existing algorithms or by creating new applications represents a major challenge for the bioinformatics community. We have developed C libraries for pairwise sequence comparison within diverse architectures, ranging from commodity systems to high performance and cloud computing environments. Exhaustive tests were performed using different datasets of closely- and distantly-related sequences that span from small viral genomes to large mammalian chromosomes. The tests demonstrated that our solution is capable of generating high quality results with a linear-time response and controlled memory consumption, being comparable or faster than the current state-of-the-art methods. We have addressed the problem of pairwise and all-versus-all comparison of large sequences in general, greatly increasing the limits on input data size. The approach described here is based on a modular out-of-core strategy that uses secondary storage to avoid reaching memory limits during the identification of High-scoring Segment Pairs (HSPs) between the sequences under comparison. Software engineering concepts were applied to avoid intermediate result re-calculation, to minimise the performance impact of input/output (I/O) operations and to modularise the process, thus enhancing application flexibility and extendibility. Our computationally-efficient approach allows tasks such as the massive comparison of complete genomes, evolutionary event detection, the identification of conserved synteny blocks and inter-genome distance calculations to be performed more effectively.
Implementing Distributed Operations: A Comparison of Two Deep Space Missions
NASA Technical Reports Server (NTRS)
Mishkin, Andrew; Larsen, Barbara
2006-01-01
Two very different deep space exploration missions--Mars Exploration Rover and Cassini--have made use of distributed operations for their science teams. In the case of MER, the distributed operations capability was implemented only after the prime mission was completed, as the rovers continued to operate well in excess of their expected mission lifetimes; Cassini, designed for a mission of more than ten years, had planned for distributed operations from its inception. The rapid command turnaround timeline of MER, as well as many of the operations features implemented to support it, have proven to be conducive to distributed operations. These features include: a single science team leader during the tactical operations timeline, highly integrated science and engineering teams, processes and file structures designed to permit multiple team members to work in parallel to deliver sequencing products, web-based spacecraft status and planning reports for team-wide access, and near-elimination of paper products from the operations process. Additionally, MER has benefited from the initial co-location of its entire operations team, and from having a single Principal Investigator, while Cassini operations have had to reconcile multiple science teams distributed from before launch. Cassini has faced greater challenges in implementing effective distributed operations. Because extensive early planning is required to capture science opportunities on its tour and because sequence development takes significantly longer than sequence execution, multiple teams are contributing to multiple sequences concurrently. The complexity of integrating inputs from multiple teams is exacerbated by spacecraft operability issues and resource contention among the teams, each of which has their own Principal Investigator. Finally, much of the technology that MER has exploited to facilitate distributed operations was not available when the Cassini ground system was designed, although later adoption of web-based and telecommunication tools has been critical to the success of Cassini operations.
Post-test navigation data analysis techniques for the shuttle ALT
NASA Technical Reports Server (NTRS)
1975-01-01
Postflight test analysis data processing techniques for shuttle approach and landing tests (ALT) navigation data are defined. Postflight test processor requirements are described, along with operational and design requirements, data input requirements, and software test requirements. The postflight test data processing is described based on the natural test sequence: quick-look analysis, postflight navigation processing, and error isolation processing. Emphasis is placed on the tradeoffs that must remain open and subject to analysis until final definition is achieved in the shuttle data processing system and the overall ALT plan. A development plan for the implementation of the ALT postflight test navigation data processing system is presented. Conclusions are presented.
MiDAS: the field guide to the microbes of activated sludge.
McIlroy, Simon Jon; Saunders, Aaron Marc; Albertsen, Mads; Nierychlo, Marta; McIlroy, Bianca; Hansen, Aviaja Anna; Karst, Søren Michael; Nielsen, Jeppe Lund; Nielsen, Per Halkjær
2015-01-01
The Microbial Database for Activated Sludge (MiDAS) field guide is a freely available online resource linking the identity of abundant and process-critical microorganisms in activated sludge wastewater treatment systems to available data related to their functional importance. Phenotypic properties of some of these genera are described, but most are known only from sequence data. The MiDAS taxonomy is a manual curation of the SILVA taxonomy that proposes a name for all genus-level taxa observed to be abundant by large-scale 16S rRNA gene amplicon sequencing of full-scale activated sludge communities. The taxonomy can be used to classify unknown sequences, and the online MiDAS field guide links the identity to the available information about their morphology, diversity, physiology and distribution. The use of a common taxonomy across the field will provide a solid foundation for the study of microbial ecology of the activated sludge process and related treatment processes. The online MiDAS field guide is a collaborative workspace intended to facilitate a better understanding of the ecology of activated sludge and related treatment processes--knowledge that will be an invaluable resource for the optimal design and operation of these systems. © The Author(s) 2015. Published by Oxford University Press.
Keeping it together: Semantic coherence stabilizes phonological sequences in short-term memory.
Savill, Nicola; Ellis, Rachel; Brooke, Emma; Koa, Tiffany; Ferguson, Suzie; Rojas-Rodriguez, Elena; Arnold, Dominic; Smallwood, Jonathan; Jefferies, Elizabeth
2018-04-01
Our ability to hold a sequence of speech sounds in mind, in the correct configuration, supports many aspects of communication, but the contribution of conceptual information to this basic phonological capacity remains controversial. Previous research has shown modest and inconsistent benefits of meaning on phonological stability in short-term memory, but these studies were based on sets of unrelated words. Using a novel design, we examined the immediate recall of sentence-like sequences with coherent meaning, alongside both standard word lists and mixed lists containing words and nonwords. We found, and replicated, substantial effects of coherent meaning on phoneme-level accuracy: The phonemes of both words and nonwords within conceptually coherent sequences were more likely to be produced together and in the correct order. Since nonwords do not exist as items in long-term memory, the semantic enhancement of phoneme-level recall for both item types cannot be explained by a lexically based item reconstruction process employed at the point of retrieval ("redintegration"). Instead, our data show, for naturalistic input, that when meaning emerges from the combination of words, the phonological traces that support language are reinforced by a semantic-binding process that has been largely overlooked by past short-term memory research.
ED-WAVE tool design approach: Case of a textile wastewater treatment plant in Blantyre, Malawi
NASA Astrophysics Data System (ADS)
Chipofya, V.; Kraslawski, A.; Avramenko, Y.
The ED-WAVE tool is a PC-based package for imparting training on wastewater treatment technologies. The system consists of four modules, viz. Reference Library, Process Builder, Case Study Manager, and Treatment Adviser. The principles of case-based design and case-based reasoning as applied in the ED-WAVE tool are utilised in this paper to evaluate the design approach of the wastewater treatment plant at the Mapeto David Whitehead & Sons (MDW&S) textile and garments factory, Blantyre, Malawi. The case being compared with MDW&S in the ED-WAVE tool is Textile Case 4 in Sri Lanka (2003). Equalisation, coagulation and rotating biological contactors is the sequencing of treatment units at Textile Case 4 in Sri Lanka. Screening, oxidation ditches and sedimentation is the sequencing of treatment units at the MDW&S textile and garments factory. The study suggests that aerobic biological treatment is necessary in the treatment of wastewater from a textile and garments factory. MDW&S incorporates a sedimentation process, which is necessary for the removal of settleable matter before the effluent is discharged to the municipal wastewater treatment plant. The study confirmed the practical use of the ED-WAVE tool in the design of wastewater treatment systems, where, after encountering a new situation, already collected decision scenarios (cases) are invoked and modified in order to arrive at a particular design alternative. What is necessary, however, is to appropriately modify the case arrived at through the Case Study Manager in order to come up with a design appropriate to the local situation, taking into account technical, socio-economic and environmental aspects.
Ma, Jin-Liang; Yin, Bin-Cheng; Le, Huynh-Nhu; Ye, Bang-Ce
2015-06-17
We have developed a label-free method for sequence-specific DNA detection based on a surface plasmon enhanced energy transfer (SPEET) process between a fluorescent DNA/AgNC string and gold nanoparticles (AuNPs). The DNA/AgNC string, prepared from a single-stranded DNA template encoding two emitter-nucleation sequences at its termini and an oligo spacer in the middle, was rationally designed to produce bright fluorescence emission. The proposed method takes advantage of two strategies. The first is the difference in binding properties of single-stranded DNA (ssDNA) and double-stranded DNA (dsDNA) toward AuNPs. The second is the SPEET process between the fluorescent DNA/AgNC string and AuNPs, in which the fluorescent DNA/AgNC string can be spontaneously adsorbed onto the surface of the AuNPs, and correspondingly the AuNPs serve as a "nanoquencher" to quench the fluorescence of the DNA/AgNC string. In the presence of target DNA, the sensing probe hybridized with the target DNA to form duplex DNA, leading to salt-induced AuNP aggregation and subsequently a weakened SPEET process between the fluorescent DNA/AgNC string and AuNPs. A red-to-blue color change of the AuNPs and a concomitant fluorescence increase were clearly observed in the sensing system, in a manner dependent on the concentration of the specific target DNA. The proposed method achieved a detection limit of ~2.5 nM, offering the merits of simple design, convenient operation, and low experimental cost, since no chemical modification, organic dye, enzymatic reaction, or separation procedure is involved.
Fagot, Joël; De Lillo, Carlo
2011-12-01
Two experiments assessed if non-human primates can be meaningfully compared to humans in a non-verbal test of serial recall. A procedure was used that was derived from variations of the Corsi test, designed to test the effects of sequence structure and movement path length in humans. Two baboons were tested in Experiment 1. The monkeys showed several attributes of human serial recall. These included an easier recall of sequences with a shorter number of items and of sequences characterized by a shorter path length when the number of items was kept constant. However, the accuracy and speed of processing did not indicate that the monkeys were able to benefit from the spatiotemporal structure of sequences. Humans tested in Experiment 2 showed a quantitatively longer memory span, and, in contrast with monkeys, benefitted from sequence structure. The results are discussed in relation to differences in how human and non-human primates segment complex visual patterns. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detail design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for the laminate. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite part is reduced by 38.8%; at the same time, its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency also increases by 44.78%.
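As a small worked example of the composite-mechanics input mentioned above, the sketch below applies the rule of mixtures to estimate the longitudinal modulus of a unidirectional ply, E1 = Vf*Ef + (1 - Vf)*Em. The fibre and matrix moduli and the fibre volume fraction are typical but hypothetical values, not data from this study.

```python
# Rule-of-mixtures estimate of a unidirectional ply's longitudinal modulus,
# E1 = Vf*Ef + (1 - Vf)*Em; the property values below are typical but hypothetical.
def rule_of_mixtures(E_fibre, E_matrix, V_fibre):
    return V_fibre * E_fibre + (1.0 - V_fibre) * E_matrix

E1 = rule_of_mixtures(E_fibre=230e9, E_matrix=3.5e9, V_fibre=0.55)   # Pa
print(f"E1 = {E1 / 1e9:.1f} GPa")
```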
LinkFinder: An expert system that constructs phylogenic trees
NASA Technical Reports Server (NTRS)
Inglehart, James; Nelson, Peter C.
1991-01-01
An expert system has been developed using the C Language Integrated Production System (CLIPS) that automates the process of constructing DNA sequence based phylogenies (trees or lineages) that indicate evolutionary relationships. LinkFinder takes as input homologous DNA sequences from distinct individual organisms. It measures variations between the sequences, selects appropriate proportionality constants, and estimates the time that has passed since each pair of organisms diverged from a common ancestor. It then designs and outputs a phylogenic map summarizing these results. LinkFinder can find genetic relationships between different species, and between individuals of the same species, including humans. It was designed to take advantage of the vast amount of sequence data being produced by the Genome Project, and should be of value to evolution theorists who wish to utilize this data, but who have no formal training in molecular genetics. Evolutionary theory holds that distinct organisms carrying a common gene inherited that gene from a common ancestor. Homologous genes vary from individual to individual and species to species, and the amount of variation is now believed to be directly proportional to the time that has passed since divergence from a common ancestor. The proportionality constant must be determined experimentally; it varies considerably with the types of organisms and DNA molecules under study. Given an appropriate constant, and the variation between two DNA sequences, a simple linear equation gives the divergence time.
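The linear relation described above can be sketched as follows: the observed fraction of differing sites, divided by an experimentally determined proportionality constant, gives an estimate of the divergence time. The rate constant and the toy sequences below are hypothetical placeholders, not values used by LinkFinder.

```python
# Sketch of the linear divergence-time estimate described above; the rate constant
# and the toy sequences are hypothetical.
def fraction_diff(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def divergence_time(a, b, rate_per_myr=0.02):
    # rate_per_myr: observed substitutions per site per million years between lineages
    return fraction_diff(a, b) / rate_per_myr

seq1 = "ACGTACGTACGTACGTACGT"
seq2 = "ACGTACGTTCGTACGAACGT"
print(f"{divergence_time(seq1, seq2):.1f} million years since divergence")
```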
A Computer Approach to Mathematics Curriculum Developments Debugging
ERIC Educational Resources Information Center
Martínez-Zarzuelo, Angélica; Roanes-Lozano, Eugenio; Fernández-Díaz, José
2016-01-01
Sequencing contents is of great importance for instructional design within the teaching planning processes. We believe that it is key for a meaningful learning. Therefore, we propose to formally establish a partial order relation among the contents. We have chosen the binary relation "to be a prerequisite" for that purpose. We have…
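A small sketch of treating "is a prerequisite of" as a partial order is given below: the declared pairs are closed under transitivity, and for each content item everything that must be taught before it is listed. The content names are hypothetical, and the authors' actual formalization may differ.

```python
# Toy sketch: close the declared "is a prerequisite of" pairs under transitivity
# and list, for each content item, everything that must precede it. Names are hypothetical.
pairs = {("fractions", "ratios"), ("ratios", "proportions"), ("fractions", "decimals")}

def transitive_closure(rel):
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

closure = transitive_closure(pairs)
items = {x for p in pairs for x in p}
for item in sorted(items):
    before = sorted(a for (a, b) in closure if b == item)
    print(f"{item}: requires {before or 'nothing'}")
```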
myPhyloDB: a local web server for the storage and analysis of metagenomics data
USDA-ARS?s Scientific Manuscript database
myPhyloDB is a user-friendly personal database with a browser-interface designed to facilitate the storage, processing, analysis, and distribution of metagenomics data. MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all availab...
Quality Assessment in the Blog Space
ERIC Educational Resources Information Center
Schaal, Markus; Fidan, Guven; Muller, Roland M.; Dagli, Orhan
2010-01-01
Purpose: The purpose of this paper is the presentation of a new method for blog quality assessment. The method uses the temporal sequence of link creation events between blogs as an implicit source for the collective tacit knowledge of blog authors about blog quality. Design/methodology/approach: The blog data are processed by the novel method for…
Purposeful Design of Formal Laboratory Instruction as a Springboard to Research Participation
ERIC Educational Resources Information Center
Cartrette, David P.; Miller, Matthew L.
2013-01-01
An innovative first- and second-year laboratory course sequence is described. The goal of the instructional model is to introduce chemistry and biochemistry majors to the process of research participation earlier in their academic training. To achieve that goal, the instructional model incorporates significant hands-on experiences with chemical…
ERIC Educational Resources Information Center
Travers, Steven T.
2017-01-01
Many developmental mathematics programs at community colleges in recent years have undergone a process of redesign in an attempt to improve the historically poor rate of successful student completion of required developmental coursework. Various curriculum and instructional design models that incorporate methods of avoiding and accelerating the…
From Organelle to Protein Gel: A 6-Wk Laboratory Project on Flagellar Proteins
ERIC Educational Resources Information Center
Mitchell, Beth Ferro; Graziano, Mary R.
2006-01-01
Research suggests that undergraduate students learn more from lab experiences that involve longer-term projects. We have developed a one-semester laboratory sequence aimed at sophomore-level undergraduates. In designing this curriculum, we focused on several educational objectives: 1) giving students a feel for the scientific research process, 2)…
Genetic variability and evolutionary dynamics of viruses of the family Closteroviridae
Rubio, Luis; Guerri, José; Moreno, Pedro
2013-01-01
RNA viruses have a great potential for genetic variation, rapid evolution and adaptation. Characterization of the genetic variation of viral populations provides relevant information on the processes involved in virus evolution and epidemiology, and it is crucial for designing reliable diagnostic tools and developing efficient and durable disease control strategies. Here we performed an updated analysis of sequences available in GenBank and reviewed present knowledge on the genetic variability and evolutionary processes of viruses of the family Closteroviridae. Several factors have shaped the genetic structure and diversity of closteroviruses. (1) A strong negative selection seems to be responsible for the high genetic stability in space and time of some viruses. (2) Long-distance migration, probably by human transport of infected propagative plant material, has resulted in genetically similar virus isolates being found in distant geographical regions. (3) Recombination between divergent sequence variants has generated new genotypes and plays an important role in the evolution of some viruses of the family Closteroviridae. (4) Interaction between virus strains or between different viruses in mixed infections may alter the accumulation of certain strains. (5) Host change or virus transmission by insect vectors induced changes in the viral population structure, due either to positive selection of sequence variants with higher fitness for host-virus or vector-virus interaction (adaptation) or to genetic drift due to random selection of sequence variants during the population bottleneck associated with the transmission process. PMID:23805130
Update on Genomic Databases and Resources at the National Center for Biotechnology Information.
Tatusova, Tatiana
2016-01-01
The National Center for Biotechnology Information (NCBI), as a primary public repository of genomic sequence data, collects and maintains enormous amounts of heterogeneous data. Data for genomes, genes, gene expression, gene variation, gene families, proteins, and protein domains are integrated with the analytical, search, and retrieval resources through the NCBI website. The text-based search and retrieval system provides a fast and easy way to navigate across diverse biological databases. Comparative genome analysis tools lead to further understanding of evolutionary processes, quickening the pace of discovery. Recent technological innovations have ignited an explosion in genome sequencing that has fundamentally changed our understanding of the biology of living organisms. This huge increase in DNA sequence data presents new challenges for the information management system and the visualization tools. New strategies have been designed to bring order to this genome sequence shockwave and improve the usability of associated data.
End-User Evaluations of Semantic Web Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCool, Rob; Cowell, Andrew J.; Thurman, David A.
Stanford University's Knowledge Systems Laboratory (KSL) is working in partnership with Battelle Memorial Institute and IBM Watson Research Center to develop a suite of technologies for information extraction, knowledge representation & reasoning, and human-information interaction, collectively entitled 'Knowledge Associates for Novel Intelligence' (KANI). We have developed an integrated analytic environment composed of a collection of analyst associates, software components that aid the user at different stages of the information analysis process. An important part of our participatory design process has been to ensure our technologies and designs are tightly integrated with the needs and requirements of our end users. To this end, we perform a sequence of evaluations towards the end of the development process that ensure the technologies are both functional and usable. This paper reports on that process.
Coarse-grained sequences for protein folding and design.
Brown, Scott; Fawzi, Nicolas J; Head-Gordon, Teresa
2003-09-16
We present the results of sequence design on our off-lattice minimalist model in which no specification of native-state tertiary contacts is needed. We start with a sequence that adopts a target topology and build on it through sequence mutation to produce new sequences that comprise distinct members within a target fold class. In this work, we use the alpha/beta ubiquitin fold class and design two new sequences that, when characterized through folding simulations, reproduce the differences in folding mechanism seen experimentally for proteins L and G. The primary implication of this work is that patterning of hydrophobic and hydrophilic residues is the physical origin for the success of relative contact-order descriptions of folding, and that these physics-based potentials provide a predictive connection between free energy landscapes and amino acid sequence (the original protein folding problem). We present results of the sequence mapping from a 20- to the three-letter code for determining a sequence that folds into the WW domain topology to illustrate future extensions to protein design.
In silico Analysis of 2085 Clones from a Normalized Rat Vestibular Periphery 3′ cDNA Library
Roche, Joseph P.; Cioffi, Joseph A.; Kwitek, Anne E.; Erbe, Christy B.; Popper, Paul
2005-01-01
The inserts from 2400 cDNA clones isolated from a normalized Rattus norvegicus vestibular periphery cDNA library were sequenced and characterized. The Wackym-Soares vestibular 3′ cDNA library was constructed from the saccular and utricular maculae, the ampullae of all three semicircular canals and Scarpa's ganglia containing the somata of the primary afferent neurons, microdissected from 104 male and female rats. The inserts from 2400 randomly selected clones were sequenced from the 5′ end. Each sequence was analyzed using the BLAST algorithm against the GenBank nonredundant, rat genome, mouse genome and human genome databases to search for high homology alignments. Of the initial 2400 clones, 315 (13%) were found to be of poor quality and did not yield useful information, and therefore were eliminated from the analysis. Of the remaining 2085 sequences, 918 (44%) were found to represent 758 unique genes having useful annotations that were identified in databases within the public domain or in the published literature; these sequences were designated as known characterized sequences. A further 1141 sequences (55%), aligning with 1011 unique sequences, had no useful annotations and were designated as known but uncharacterized sequences. Of the remaining 26 sequences (1%), 24 aligned with rat genomic sequences, but none matched previously described rat expressed sequence tags or mRNAs. No significant alignment to the rat or human genomic sequences could be found for the remaining 2 sequences. Of the 2085 sequences analyzed, 86% were singletons. The known, characterized sequences were analyzed with the FatiGO online data-mining tool (http://fatigo.bioinfo.cnio.es/) to identify level 5 biological process gene ontology (GO) terms for each alignment and to group alignments with similar or identical GO terms. Numerous genes were identified that have not been previously shown to be expressed in the vestibular system. Further characterization of the novel cDNA sequences may lead to the identification of genes with vestibular-specific functions. Continued analysis of the rat vestibular periphery transcriptome should provide new insights into vestibular function and generate new hypotheses. Physiological studies are necessary to further elucidate the roles of the identified genes and novel sequences in vestibular function. PMID:16103642
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
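For readers unfamiliar with overlap-based merging, the sketch below shows the general idea only (it is not BBMerge's actual algorithm): slide the reverse complement of read 2 against read 1 and accept the longest overlap whose mismatch count stays under a cap. The thresholds and reads are made up.

```python
# Illustrative sketch of overlap-based read merging (not BBMerge's algorithm):
# reverse-complement read 2, then accept the longest suffix/prefix overlap with
# read 1 whose mismatch count stays within a cap and a minimum overlap length.
COMP = str.maketrans("ACGTN", "TGCAN")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def merge_pair(r1: str, r2: str, min_overlap: int = 12, max_mismatch: int = 2):
    r2rc = revcomp(r2)
    for ov in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        mismatches = sum(a != b for a, b in zip(r1[-ov:], r2rc[:ov]))
        if mismatches <= max_mismatch:
            return r1 + r2rc[ov:]          # merged fragment
    return None                            # no acceptable overlap found

r1 = "AAACCCGGGTTTACGT"                    # forward read
r2 = "ACGTACGTAAACCCGG"                    # reverse read from the other end
print(merge_pair(r1, r2))                  # -> AAACCCGGGTTTACGTACGT
```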
Voss, T; Falkner, E; Ahorn, H; Krystek, E; Maurer-Fogy, I; Bodo, G; Hauptmann, R
1994-01-01
Human interferon-alpha 2c (IFN-alpha 2c) was produced in Escherichia coli under the control of the alkaline phosphatase promoter using a periplasmic expression system. Compared with other leader sequences, the heat-stable enterotoxin II leader of E. coli (STII) resulted in the highest rate of correct processing as judged by Western-blot analysis. The fermentation was designed as a fed-batch process in order to obtain a high yield of biomass. The processing rate of IFN-alpha 2c could be increased from 25% to more than 50% by shifting the fermentation pH from 7.0 to 6.7. IFN-alpha 2c extracted from the periplasm was purified by a new four-step chromatographic procedure. Whereas cytoplasmically produced IFN-alpha 2c does not have its full native structure, IFN-alpha 2c extracted from the periplasm was found to be correctly folded, as shown by c.d. spectroscopy. Peptide-map analysis in combination with m.s. revealed the correct formation of disulphide bridges. N-terminal sequence analysis showed complete removal of the leader sequence, creating the authentic N-terminus starting with cysteine. PMID:8141788
Reading, Benjamin J; Chapman, Robert W; Schaff, Jennifer E; Scholl, Elizabeth H; Opperman, Charles H; Sullivan, Craig V
2012-02-21
The striped bass and its relatives (genus Morone) are important fisheries and aquaculture species native to estuaries and rivers of the Atlantic coast and Gulf of Mexico in North America. To open avenues of gene expression research on reproduction and breeding of striped bass, we generated a collection of expressed sequence tags (ESTs) from a complementary DNA (cDNA) library representative of their ovarian transcriptome. Sequences of a total of 230,151 ESTs (51,259,448 bp) were acquired by Roche 454 pyrosequencing of cDNA pooled from ovarian tissues obtained at all stages of oocyte growth, at ovulation (eggs), and during preovulatory atresia. Quality filtering of ESTs allowed assembly of 11,208 high-quality contigs ≥ 100 bp, including 2,984 contigs 500 bp or longer (average length 895 bp). Blastx comparisons revealed 5,482 gene orthologues (E-value < 10^-3), of which 4,120 (36.7% of total contigs) were annotated with Gene Ontology terms (E-value < 10^-6). There were 5,726 remaining unknown unique sequences (51.1% of total contigs). All of the high-quality EST sequences are available in the National Center for Biotechnology Information (NCBI) Short Read Archive (GenBank: SRX007394). Informative contigs were considered to be abundant if they were assembled from groups of ESTs comprising ≥ 0.15% of the total short read sequences (≥ 345 reads/contig). Approximately 52.5% of these abundant contigs were predicted to have predominant ovary expression through digital differential display in silico comparisons to zebrafish (Danio rerio) UniGene orthologues. Over 1,300 Gene Ontology terms from Biological Process classes of Reproduction, Reproductive process, and Developmental process were assigned to this collection of annotated contigs. This first large reference sequence database available for the ecologically and economically important temperate basses (genus Morone) provides a foundation for gene expression studies in these species. The predicted predominance of ovary gene expression and assignment of directly relevant Gene Ontology classes suggests a powerful utility of this dataset for analysis of ovarian gene expression related to fundamental questions of oogenesis. Additionally, a high definition Agilent 60-mer oligo ovary 'UniClone' microarray with 8 × 15,000 probe format has been designed based on this striped bass transcriptome (eArray Group: Striper Group, Design ID: 029004).
swga: a primer design toolkit for selective whole genome amplification.
Clarke, Erik L; Sundararaman, Sesh A; Seifert, Stephanie N; Bushman, Frederic D; Hahn, Beatrice H; Brisson, Dustin
2017-07-15
Population genomic analyses are often hindered by difficulties in obtaining sufficient numbers of genomes for analysis by DNA sequencing. Selective whole-genome amplification (SWGA) provides an efficient approach to amplify microbial genomes from complex backgrounds for sequence acquisition. However, the process of designing sets of primers for this method has many degrees of freedom and would benefit from an automated process to evaluate the vast number of potential primer sets. Here, we present swga , a program that identifies primer sets for SWGA and evaluates them for efficiency and selectivity. We used swga to design and test primer sets for the selective amplification of Wolbachia pipientis genomic DNA from infected Drosophila melanogaster and Mycobacterium tuberculosis from human blood. We identify primer sets that successfully amplify each against their backgrounds and describe a general method for using swga for arbitrary targets. In addition, we describe characteristics of primer sets that correlate with successful amplification, and present guidelines for implementation of SWGA to detect new targets. Source code and documentation are freely available on https://www.github.com/eclarke/swga . The program is implemented in Python and C and licensed under the GNU Public License. ecl@mail.med.upenn.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
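The selectivity criterion described in this abstract, primers that bind often in the target genome and rarely in the background, can be sketched as follows. This is not swga's actual scoring; the toy genomes, candidate primers, and length normalization below are illustrative assumptions only.

```python
# A minimal sketch (not swga's actual scoring) of the core idea behind
# selective whole-genome amplification primer evaluation: prefer primers
# that bind frequently in the target genome and rarely in the background.
def count_sites(primer: str, genome: str) -> int:
    """Count exact, possibly overlapping occurrences of `primer` in `genome`."""
    count, start = 0, 0
    while True:
        idx = genome.find(primer, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

def selectivity(primer: str, target: str, background: str) -> float:
    """Target-to-background binding-frequency ratio, normalized by genome length."""
    t = count_sites(primer, target) / max(len(target), 1)
    b = count_sites(primer, background) / max(len(background), 1)
    return t / b if b > 0 else float("inf")

# Toy genomes and candidate primers (illustrative only).
target = "ATATGCGCATATGCGCATATGCGC"
background = "GGGGCCCCGGGGCCCCGGGGCCCC"
for p in ["ATGCGC", "GGGGCC"]:
    print(p, selectivity(p, target, background))
```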
EMU processing - A myth dispelled
NASA Technical Reports Server (NTRS)
Peacock, Paul R.; Wilde, Richard C.; Lutz, Glenn C.; Melgares, Michael A.
1991-01-01
The refurbishment-and-checkout 'processing' activities entailed by the Space Shuttle Extravehicular Mobility Units (EMUs) are currently significantly more modest, at 1050 man-hours, than when Space Shuttle services began (involving about 4000 man-hours). This great improvement in hardware efficiency is due to the design or modification of test rigs for simplification of procedures, as well as those procedures' standardization, in conjunction with an increase in hardware confidence which has allowed the extension of inspection, service, and testing intervals. Recent simplification of the hardware-processing sequence could reduce EMU processing requirements to 600 man-hours in the near future.
NASA Astrophysics Data System (ADS)
Zou, Yuan; Shen, Tianxing
2013-03-01
To provide a wider variety of photometric data beyond the illumination calculations performed during architectural and luminous environment design, this paper presents a way of combining luminous environment design with the SM light environment measuring system, which comprises a set of experimental devices, including light-information collection and processing modules, and can supply various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly involves rebuilding the experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. As the analysis progressed, the operation sequence and points requiring attention during simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were made. The paper thus provides a useful reference for coordinating luminous environment design with the SM light environment measuring system.
A fast one-chip event-preprocessor and sequencer for the Simbol-X Low Energy Detector
NASA Astrophysics Data System (ADS)
Schanz, T.; Tenzer, C.; Maier, D.; Kendziorra, E.; Santangelo, A.
2010-12-01
We present FPGA-based digital camera electronics consisting of an Event-Preprocessor (EPP) for on-board data preprocessing and a related Sequencer (SEQ) that generates the necessary signals to control the readout of the detector. The device was originally designed for the Simbol-X low energy detector (LED). The EPP operates on 64×64 pixel images and has a real-time processing capability of more than 8000 frames per second. The already working releases of the EPP and the SEQ have now been combined into one Digital-Camera-Controller-Chip (D3C).
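As a software illustration of the kind of per-frame event extraction such a preprocessor performs (the real EPP is FPGA logic, and its event selection criteria are not reproduced here), the sketch below flags pixels above a noise threshold in a 64×64 frame; the threshold and frame contents are made up.

```python
# Illustrative sketch (software, not FPGA logic) of simple per-frame event
# extraction: flag pixels whose value exceeds a noise threshold and report
# them as (x, y, energy) events. Threshold and frame are synthetic examples.
import numpy as np

def extract_events(frame: np.ndarray, threshold: float):
    ys, xs = np.nonzero(frame > threshold)
    return [(int(x), int(y), float(frame[y, x])) for x, y in zip(xs, ys)]

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, size=(64, 64))   # noise-only 64x64 frame
frame[10, 20] = 25.0                           # one injected "photon" event
print(extract_events(frame, threshold=5.0))    # -> [(20, 10, 25.0)]
```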
Exploring the repeat protein universe through computational protein design
Brunette, TJ; Parmeggiani, Fabio; Huang, Po-Ssu; ...
2015-12-16
A central question in protein evolution is the extent to which naturally occurring proteins sample the space of folded structures accessible to the polypeptide chain. Repeat proteins composed of multiple tandem copies of a modular structural unit are widespread in nature and have critical roles in molecular recognition, signalling, and other essential biological processes. Naturally occurring repeat proteins have been re-engineered for molecular recognition and modular scaffolding applications. In this paper, we use computational protein design to investigate the space of folded structures that can be generated by tandem repeating a simple helix–loop–helix–loop structural motif. Eighty-three designs with sequences unrelated to known repeat proteins were experimentally characterized. Of these, 53 are monomeric and stable at 95 °C, and 43 have solution X-ray scattering spectra consistent with the design models. Crystal structures of 15 designs spanning a broad range of curvatures are in close agreement with the design models with root mean square deviations ranging from 0.7 to 2.5 Å. Finally, our results show that existing repeat proteins occupy only a small fraction of the possible repeat protein sequence and structure space and that it is possible to design novel repeat proteins with precisely specified geometries, opening up a wide array of new possibilities for biomolecular engineering.
A Computer-Based Instructional Support Network: Design, Development, and Evaluation
1990-09-01
mail systems, the ISN would be relatively easy to learn and use. Survey Related Projects A literature review was conducted using the Manpower and... the impact of the ISN upon student attrition, performance, and attitudes. The context of the evaluation was a sequence of NPS Continuing Education (CE...waited for the subject to complete the task alone. Process Measures The following forms of process data (relating to evaluation objectives 1 and 2) were
Design of multi-phase dynamic chemical networks
NASA Astrophysics Data System (ADS)
Chen, Chenrui; Tan, Junjun; Hsieh, Ming-Chien; Pan, Ting; Goodwin, Jay T.; Mehta, Anil K.; Grover, Martha A.; Lynn, David G.
2017-08-01
Template-directed polymerization reactions enable the accurate storage and processing of nature's biopolymer information. This mutualistic relationship of nucleic acids and proteins, a network known as life's central dogma, is now marvellously complex, and the progressive steps necessary for creating the initial sequence and chain-length-specific polymer templates are lost to time. Here we design and construct dynamic polymerization networks that exploit metastable prion cross-β phases. Mixed-phase environments have been used for constructing synthetic polymers, but these dynamic phases emerge naturally from the growing peptide oligomers and create environments suitable both to nucleate assembly and select for ordered templates. The resulting templates direct the amplification of a phase containing only chain-length-specific peptide-like oligomers. Such multi-phase biopolymer dynamics reveal pathways for the emergence, self-selection and amplification of chain-length- and possibly sequence-specific biopolymers.
Artificially Engineered Protein Polymers.
Yang, Yun Jung; Holmberg, Angela L; Olsen, Bradley D
2017-06-07
Modern polymer science increasingly requires precise control over macromolecular structure and properties for engineering advanced materials and biomedical systems. The application of biological processes to design and synthesize artificial protein polymers offers a means for furthering macromolecular tunability, enabling polymers with dispersities of ∼1.0 and monomer-level sequence control. Taking inspiration from materials evolved in nature, scientists have created modular building blocks with simplified monomer sequences that replicate the function of natural systems. The corresponding protein engineering toolbox has enabled the systematic development of complex functional polymeric materials across areas as diverse as adhesives, responsive polymers, and medical materials. This review discusses the natural proteins that have inspired the development of key building blocks for protein polymer engineering and the function of these elements in material design. The prospects and progress for scalable commercialization of protein polymers are reviewed, discussing both technology needs and opportunities.
RBT-GA: a novel metaheuristic for solving the multiple sequence alignment problem
Taheri, Javid; Zomaya, Albert Y
2009-01-01
Background Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate the underlying main characteristics/functions. This information is also used to generate phylogenetic trees. Results This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. Conclusion RBT-GA is tested with one of the well-known benchmark suites (BALiBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences. PMID:19594869
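The evolutionary-search component of this approach can be illustrated with a compact, generic genetic-algorithm skeleton. It is not the actual RBT-GA, which encodes alignments as sets of poles and scores them with an alignment-quality fitness; all names and parameters below are illustrative.

```python
# A compact, generic genetic-algorithm skeleton of the kind RBT-GA builds on
# (illustrative only; the real RBT-GA encodes alignments as pole sets and
# uses an alignment-quality fitness, which is not reproduced here).
import random

def evolve(fitness, random_individual, mutate, crossover,
           pop_size=30, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by fitness
        parents = pop[: pop_size // 2]               # keep the better half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Toy usage: maximize the number of 1s in a bit string.
best = evolve(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 1) for _ in range(20)],
    mutate=lambda ind: [b ^ (random.random() < 0.05) for b in ind],
    crossover=lambda a, b: a[:10] + b[10:],
)
print(best, sum(best))
```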
Brownstein, Catherine A; Beggs, Alan H; Homer, Nils; Merriman, Barry; Yu, Timothy W; Flannery, Katherine C; DeChene, Elizabeth T; Towne, Meghan C; Savage, Sarah K; Price, Emily N; Holm, Ingrid A; Luquette, Lovelace J; Lyon, Elaine; Majzoub, Joseph; Neupert, Peter; McCallie, David; Szolovits, Peter; Willard, Huntington F; Mendelsohn, Nancy J; Temme, Renee; Finkel, Richard S; Yum, Sabrina W; Medne, Livija; Sunyaev, Shamil R; Adzhubey, Ivan; Cassa, Christopher A; de Bakker, Paul I W; Duzkale, Hatice; Dworzyński, Piotr; Fairbrother, William; Francioli, Laurent; Funke, Birgit H; Giovanni, Monica A; Handsaker, Robert E; Lage, Kasper; Lebo, Matthew S; Lek, Monkol; Leshchiner, Ignaty; MacArthur, Daniel G; McLaughlin, Heather M; Murray, Michael F; Pers, Tune H; Polak, Paz P; Raychaudhuri, Soumya; Rehm, Heidi L; Soemedi, Rachel; Stitziel, Nathan O; Vestecka, Sara; Supper, Jochen; Gugenmus, Claudia; Klocke, Bernward; Hahn, Alexander; Schubach, Max; Menzel, Mortiz; Biskup, Saskia; Freisinger, Peter; Deng, Mario; Braun, Martin; Perner, Sven; Smith, Richard J H; Andorf, Janeen L; Huang, Jian; Ryckman, Kelli; Sheffield, Val C; Stone, Edwin M; Bair, Thomas; Black-Ziegelbein, E Ann; Braun, Terry A; Darbro, Benjamin; DeLuca, Adam P; Kolbe, Diana L; Scheetz, Todd E; Shearer, Aiden E; Sompallae, Rama; Wang, Kai; Bassuk, Alexander G; Edens, Erik; Mathews, Katherine; Moore, Steven A; Shchelochkov, Oleg A; Trapane, Pamela; Bossler, Aaron; Campbell, Colleen A; Heusel, Jonathan W; Kwitek, Anne; Maga, Tara; Panzer, Karin; Wassink, Thomas; Van Daele, Douglas; Azaiez, Hela; Booth, Kevin; Meyer, Nic; Segal, Michael M; Williams, Marc S; Tromp, Gerard; White, Peter; Corsmeier, Donald; Fitzgerald-Butt, Sara; Herman, Gail; Lamb-Thrush, Devon; McBride, Kim L; Newsom, David; Pierson, Christopher R; Rakowsky, Alexander T; Maver, Aleš; Lovrečić, Luca; Palandačić, Anja; Peterlin, Borut; Torkamani, Ali; Wedell, Anna; Huss, Mikael; Alexeyenko, Andrey; Lindvall, Jessica M; Magnusson, Måns; Nilsson, Daniel; Stranneheim, Henrik; Taylan, Fulya; Gilissen, Christian; Hoischen, Alexander; van Bon, Bregje; Yntema, Helger; Nelen, Marcel; Zhang, Weidong; Sager, Jason; Zhang, Lu; Blair, Kathryn; Kural, Deniz; Cariaso, Michael; Lennon, Greg G; Javed, Asif; Agrawal, Saloni; Ng, Pauline C; Sandhu, Komal S; Krishna, Shuba; Veeramachaneni, Vamsi; Isakov, Ofer; Halperin, Eran; Friedman, Eitan; Shomron, Noam; Glusman, Gustavo; Roach, Jared C; Caballero, Juan; Cox, Hannah C; Mauldin, Denise; Ament, Seth A; Rowen, Lee; Richards, Daniel R; San Lucas, F Anthony; Gonzalez-Garay, Manuel L; Caskey, C Thomas; Bai, Yu; Huang, Ying; Fang, Fang; Zhang, Yan; Wang, Zhengyuan; Barrera, Jorge; Garcia-Lobo, Juan M; González-Lamuño, Domingo; Llorca, Javier; Rodriguez, Maria C; Varela, Ignacio; Reese, Martin G; De La Vega, Francisco M; Kiruluta, Edward; Cargill, Michele; Hart, Reece K; Sorenson, Jon M; Lyon, Gholson J; Stevenson, David A; Bray, Bruce E; Moore, Barry M; Eilbeck, Karen; Yandell, Mark; Zhao, Hongyu; Hou, Lin; Chen, Xiaowei; Yan, Xiting; Chen, Mengjie; Li, Cong; Yang, Can; Gunel, Murat; Li, Peining; Kong, Yong; Alexander, Austin C; Albertyn, Zayed I; Boycott, Kym M; Bulman, Dennis E; Gordon, Paul M K; Innes, A Micheil; Knoppers, Bartha M; Majewski, Jacek; Marshall, Christian R; Parboosingh, Jillian S; Sawyer, Sarah L; Samuels, Mark E; Schwartzentruber, Jeremy; Kohane, Isaac S; Margulies, David M
2014-03-25
There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing platform vendors. The challenge was to analyze and interpret these data with the goals of identifying disease-causing variants and reporting the findings in a clinically useful format. Participating contestant groups were solicited broadly, and an independent panel of judges evaluated their performance. A total of 30 international groups were engaged. The entries reveal a general convergence of practices on most elements of the analysis and interpretation process. However, even given this commonality of approach, only two groups identified the consensus candidate variants in all disease cases, demonstrating a need for consistent fine-tuning of the generally accepted methods. There was greater diversity of the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. The CLARITY Challenge provides a comprehensive assessment of current practices for using genome sequencing to diagnose and report genetic diseases. There is remarkable convergence in bioinformatic techniques, but medical interpretation and reporting are areas that require further development by many groups.
How the Sequence of a Gene Specifies Structural Symmetry in Proteins
Shen, Xiaojuan; Huang, Tongcheng; Wang, Guanyu; Li, Guanglin
2015-01-01
Internal symmetry is commonly observed in the majority of fundamental protein folds. Meanwhile, sufficient evidence suggests that nascent polypeptide chains of proteins have the potential to start the co-translational folding process and this process allows mRNA to contain additional information on protein structure. In this paper, we study the relationship between gene sequences and protein structures from the viewpoint of symmetry to explore how gene sequences code for structural symmetry in proteins. We found that, for a set of two-fold symmetric proteins from left-handed beta-helix fold, intragenic symmetry always exists in their corresponding gene sequences. Meanwhile, codon usage bias and local mRNA structure might be involved in modulating translation speed for the formation of structural symmetry: a major decrease of local codon usage bias in the middle of the codon sequence can be identified as a common feature; and major or consecutive decreases in local mRNA folding energy near the boundaries of the symmetric substructures can also be observed. The results suggest that gene duplication and fusion may be an evolutionarily conserved process for this protein fold. In addition, the usage of rare codons and the formation of higher order of secondary structure near the boundaries of symmetric substructures might have coevolved as conserved mechanisms to slow down translation elongation and to facilitate effective folding of symmetric substructures. These findings provide valuable insights into our understanding of the mechanisms of translation and its evolution, as well as the design of proteins via symmetric modules. PMID:26641668
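The local codon-usage analysis mentioned in this abstract can be sketched as a sliding-window fraction of rare codons along a coding sequence; the rare-codon set, window size, and example sequence below are placeholders rather than data from the study.

```python
# A minimal sketch of a local codon-usage profile: the fraction of "rare"
# codons in a sliding window along a coding sequence. The rare-codon set and
# the toy sequence are illustrative assumptions, not data from the paper.
RARE_CODONS = {"CTA", "ATA", "CGA", "CGG", "AGG", "AGA"}  # hypothetical set

def rare_codon_profile(cds: str, window: int = 10):
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    profile = []
    for i in range(len(codons) - window + 1):
        win = codons[i:i + window]
        profile.append(sum(c in RARE_CODONS for c in win) / window)
    return profile

cds = "ATGCTAATACGACGGAGGAGAAAA" * 5   # toy coding sequence
print(rare_codon_profile(cds))          # dips and peaks mark local usage bias
```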
Effect of Automatic Processing on Specification of Problem Solutions for Computer Programs.
1981-03-01
Number 7 ± 2" item limitation on human short-term memory capability (Miller, 1956) should be a guiding principle in program design. Yourdon and...input either a single example solution or multiple example solutions in sequence. If a participant's P1 has a low value - near 0 - it may be concluded... Principles in Experimental Design, Winer, 1971). Table 12: ANOVA Results for Performance Measure 2 (source of variation, DF, MS, F).
A Systems Engineering Capability Maturity Model, Version 1.1,
1995-11-01
of a sequence of actions to be taken to perform a given task. [SECMM] 1. A set of activities (ISO 12207). 2. A set of practices that address the...standards One of the design goals of the SE-CMM effort was to capture the salient concepts from emerging standards and initiatives (e.g., ISO 9001...history for the SE-CMM: Version Designator Content Change Notes Release 1 • architecture rationale • Process Areas • ISO (SPICE) BPG 0.05 summary
Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M
2013-08-01
An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kyro, Kelly; Manandhar, Surya P.; Mullen, Daniel; Schmidt, Walter K.; Distefano, Mark D.
2012-01-01
Rce1p catalyzes the proteolytic trimming of C-terminal tripeptides from isoprenylated proteins containing CAAX-box sequences. Because Rce1p processing is a necessary component in the Ras pathway of oncogenic signal transduction, Rce1p holds promise as a potential target for therapeutic intervention. However, its mechanism of proteolysis and active site have yet to be defined. Here, we describe synthetic peptide analogues that mimic the natural lipidated Rce1p substrate and incorporate photolabile groups for photoaffinity-labeling applications. These photoactive peptides are designed to crosslink to residues in or near the Rce1p active site. By incorporating the photoactive group via p-benzoyl-L-phenylalanine (Bpa) residues directly into the peptide substrate sequence, the labeling efficiency was substantially increased relative to a previously-synthesized compound. Incorporation of biotin on the N-terminus of the peptides permitted photolabeled Rce1p to be isolated via streptavidin affinity capture. Our findings further suggest that residues outside the CAAX-box sequence are in contact with Rce1p, which has implications for future inhibitor design. PMID:22079863
Designable and dynamic single-walled stiff nanotubes assembled from sequence-defined peptoids
Jin, Haibao; Ding, Yan-Huai; Wang, Mingming; ...
2018-01-18
Despite recent advances in assembly of organic nanotubes, conferral of sequence-defined engineering and dynamic response characteristics to the tubules remains a challenge. Here we report a new family of highly-designable and dynamic single-walled nanotubes assembled from sequence-defined peptoids through a unique “rolling-up and closure of nanosheet” mechanism. During the assembly process, amorphous spherical particles of amphiphilic peptoid oligomers (APOs) crystallized to form well-defined nanosheets which were then folded to form single-walled peptoid nanotubes (SW-PNTs). These SW-PNTs undergo a pH-triggered, reversible contraction-expansion motion. By varying the number of hydrophobic residues of APOs, we demonstrate the tuning of PNT wall thickness and diameter, and mechanical properties. AFM-based mechanical measurements indicate that PNTs are highly stiff (Young’s Modulus ~13-17 GPa), comparable to the stiffest known biological materials. We further demonstrate the precise incorporation of functional groups within PNTs and the application of functional PNTs in water decontamination. We believe these SW-PNTs can provide a robust platform for development of biomimetic materials tailored to specific applications.
Controlling the Surface Chemistry of Graphite by Engineered Self-Assembled Peptides
Khatayevich, Dmitriy; So, Christopher R.; Hayamizu, Yuhei; Gresswell, Carolyn; Sarikaya, Mehmet
2012-01-01
The systematic control over surface chemistry is a long-standing challenge in biomedical and nanotechnological applications for graphitic materials. As a novel approach, we utilize graphite-binding dodecapeptides that self-assemble into dense domains to form monolayer thick long-range ordered films on graphite. Specifically, the peptides are rationally designed through their amino acid sequences to predictably display hydrophilic and hydrophobic characteristics while maintaining their self-assembly capabilities on the solid substrate. The peptides are observed to maintain a high tolerance for sequence modification, allowing the control over surface chemistry via their amino acid sequence. Furthermore, through a single step co-assembly of two different designed peptides, we predictably and precisely tune the wettability of the resulting functionalized graphite surfaces from 44 to 83 degrees. The modular molecular structures and predictable behavior of short peptides demonstrated here give rise to a novel platform for functionalizing graphitic materials that offers numerous advantages, including non-invasive modification of the substrate, bio-compatible processing in an aqueous environment, and simple fusion with other functional biological molecules. PMID:22428620
Improved Modeling of Side-Chain–Base Interactions and Plasticity in Protein–DNA Interface Design
Thyme, Summer B.; Baker, David; Bradley, Philip
2012-01-01
Combinatorial sequence optimization for protein design requires libraries of discrete side-chain conformations. The discreteness of these libraries is problematic, particularly for long, polar side chains, since favorable interactions can be missed. Previously, an approach to loop remodeling where protein backbone movement is directed by side-chain rotamers predicted to form interactions previously observed in native complexes (termed “motifs”) was described. Here, we show how such motif libraries can be incorporated into combinatorial sequence optimization protocols and improve native complex recapitulation. Guided by the motif rotamer searches, we made improvements to the underlying energy function, increasing recapitulation of native interactions. To further test the methods, we carried out a comprehensive experimental scan of amino acid preferences in the I-AniI protein–DNA interface and found that many positions tolerated multiple amino acids. This sequence plasticity is not observed in the computational results because of the fixed-backbone approximation of the model. We improved modeling of this diversity by introducing DNA flexibility and reducing the convergence of the simulated annealing algorithm that drives the design process. In addition to serving as a benchmark, this extensive experimental data set provides insight into the types of interactions essential to maintain the function of this potential gene therapy reagent. PMID:22426128
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
GapFiller: a de novo assembly approach to fill the gap within paired reads
2012-01-01
Background Next Generation Sequencing technologies are able to provide high genome coverages at a relatively low cost. However, due to limited reads' length (from 30 bp up to 200 bp), specific bioinformatics problems have become even more difficult to solve. De novo assembly with short reads, for example, is more complicated at least for two reasons: first, the overall amount of "noisy" data to cope with has increased and, second, as the reads' length decreases the number of unsolvable repeats grows. Our work aims to get at the root of the problem by providing a pre-processing tool capable of producing (in silico) longer and highly accurate sequences from a collection of Next Generation Sequencing reads. Results In this paper a seed-and-extend local assembler is presented. The kernel algorithm is a loop that, starting from a read used as seed, keeps extending it using heuristics whose main goal is to produce a collection of error-free and longer sequences. In particular, GapFiller carefully detects reliable overlaps and clusters similar reads in order to reconstruct the missing part between the two ends of the same insert. Our tool's output has been validated on 24 experiments using both simulated and real paired reads datasets. The output sequences are declared correct when the seed-mate is found. In the experiments performed, GapFiller was able to extend high percentages of the processed seeds and find their mates, with a false positive rate that turned out to be nearly negligible. Conclusions GapFiller, starting from a sufficiently high short-read coverage, is able to produce high coverages of accurate longer sequences (from 300 bp up to 3500 bp). The procedure to perform safe extensions, together with the mate-found check, turned out to be a powerful criterion to guarantee contigs' correctness. GapFiller has further potential, as it could be applied in a number of different scenarios, including the post-processing validation of insertions/deletions detection pipelines, pre-processing routines on datasets for de novo assembly pipelines, or in any hierarchical approach designed to assemble, analyse or validate pools of sequences. PMID:23095524
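The seed-and-extend idea can be illustrated with a much-simplified sketch (GapFiller's overlap detection, read clustering, and quality checks are far more careful): a seed is repeatedly extended by reads whose prefix exactly matches its current suffix. All reads and parameters below are toy examples.

```python
# A simplified seed-and-extend sketch in the spirit of GapFiller (illustration
# only). The seed is repeatedly extended with any read whose prefix matches
# the current contig suffix exactly over at least `min_overlap` bases.
def extend_seed(seed: str, reads: list[str], min_overlap: int = 8,
                max_rounds: int = 100) -> str:
    contig = seed
    for _ in range(max_rounds):
        extended = False
        for read in reads:
            for ov in range(min(len(contig), len(read)) - 1, min_overlap - 1, -1):
                if contig.endswith(read[:ov]) and len(read) > ov:
                    contig += read[ov:]    # append the non-overlapping tail
                    extended = True
                    break
            if extended:
                break
        if not extended:                   # no read extends the contig further
            break
    return contig

reads = ["CCGGTTAACCGGAT", "AACCGGATTTTGCA", "ATTTTGCACGCGCG"]
print(extend_seed("AAAACCGGTTAACC", reads))
```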
A multiple-alignment based primer design algorithm for genetically highly variable DNA targets
2013-01-01
Background Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
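One of the constraints listed in this abstract, matching of melting temperatures, can be sketched with the simple Wallace rule for short oligos; PrimerDesign itself handles degeneracy, barcodes, and dimerization with more sophisticated thermodynamic models, so the code below is only an illustration with made-up candidates.

```python
# A minimal sketch of melting-temperature matching using the Wallace rule
# (Tm = 2*(A+T) + 4*(G+C) degrees C for short oligos). PrimerDesign itself
# uses richer thermodynamics and additional constraints.
def wallace_tm(primer: str) -> float:
    at = sum(primer.count(b) for b in "AT")
    gc = sum(primer.count(b) for b in "GC")
    return 2.0 * at + 4.0 * gc

def tm_matched(candidates: list[str], target_tm: float, tolerance: float = 2.0):
    """Keep only candidates whose Tm lies within `tolerance` of the target."""
    return [p for p in candidates if abs(wallace_tm(p) - target_tm) <= tolerance]

candidates = ["ACGTACGTACGTACGTAC", "GCGCGCGCATATATAT", "ATATATATATATATAT"]
print([(p, wallace_tm(p)) for p in candidates])
print(tm_matched(candidates, target_tm=54.0))   # only the first candidate passes
```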
Ordering Design Tasks Based on Coupling Strengths
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Bloebaum, C. L.
1994-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.
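A toy version of ordering tasks by coupling strength is sketched below. It is a simple greedy heuristic for illustration, not DeMAID's actual rule base, and the disciplines and coupling values are made up.

```python
# A toy greedy ordering heuristic in the spirit of the idea above (not
# DeMAID's rules): place next the task with the least coupling strength still
# owed to it by unplaced tasks, so strong couplings tend to feed forward.
def order_tasks(coupling: dict[str, dict[str, float]]):
    order = []
    remaining = set(coupling)
    while remaining:
        def pending_feedback(t):
            # coupling strength flowing into t from tasks not yet placed
            return sum(coupling[src].get(t, 0.0) for src in remaining if src != t)
        nxt = min(remaining, key=pending_feedback)
        order.append(nxt)
        remaining.remove(nxt)
    return order

# coupling[src][dst] = strength of data passed from src to dst (made-up values)
coupling = {
    "aero":       {"structures": 0.9, "controls": 0.2},
    "structures": {"aero": 0.3, "controls": 0.7},
    "controls":   {"aero": 0.1},
}
print(order_tasks(coupling))   # -> ['aero', 'structures', 'controls']
```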
De Novo Design of Skin-Penetrating Peptides for Enhanced Transdermal Delivery of Peptide Drugs.
Menegatti, Stefano; Zakrewsky, Michael; Kumar, Sunny; De Oliveira, Joshua Sanchez; Muraski, John A; Mitragotri, Samir
2016-03-09
Skin-penetrating peptides (SPPs) are attracting increasing attention as a non-invasive strategy for transdermal delivery of therapeutics. The identification of SPP sequences, however, currently performed by experimental screening of peptide libraries, is very laborious. Recent studies have shown that, to be effective enhancers, SPPs must possess affinity for both skin keratin and the drug of interest. We therefore developed a computational process for generating and screening virtual libraries of disulfide-cyclic peptides against keratin and cyclosporine A (CsA) to identify SPPs capable of enhancing transdermal CsA delivery. The selected sequences were experimentally tested and found to bind both CsA and keratin, as determined by mass spectrometry and affinity chromatography, and enhance transdermal permeation of CsA. Four heptameric sequences that emerged as leading candidates (ACSATLQHSCG, ACSLTVNWNCG, ACTSTGRNACG, and ACSASTNHNCG) were tested and yielded CsA permeation on par with the previously identified SPP SPACE™. An octameric peptide (ACNAHQARSTCG) yielded significantly higher delivery of CsA compared to heptameric SPPs. The safety profile of the selected sequences was also validated by incubation with skin keratinocytes. This method thus represents an effective procedure for the de novo design of skin-penetrating peptides for the delivery of desired therapeutic or cosmetic agents. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, Shikai; Zhang, Jiaren; Yao, Jun; Liu, Zhanjiang
2016-05-01
The complete mitochondrial genome of the armored catfish, Hypostomus plecostomus, was determined by next-generation sequencing of genomic DNA without prior sample processing or primer design. Bioinformatics analysis yielded the entire mitochondrial genome sequence, 16,523 bp in length. The H. plecostomus mitochondrial genome consists of 13 protein-coding genes, 22 tRNA genes, 2 rRNA genes, and 1 control region, showing the typical circular structure of vertebrate mitochondrial genomes. The whole-genome base composition was estimated to be 31.8% A, 27.0% T, 14.6% G, and 26.6% C, with an A/T bias of 58.8%. This work provides the H. plecostomus mitochondrial genome sequence, which should be valuable for species identification, phylogenetic analysis, and conservation genetics studies in catfishes.
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. Model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
Domain-general sequence learning deficit in specific language impairment.
Lukács, Agnes; Kemény, Ferenc
2014-05-01
Grammar-specific accounts of specific language impairment (SLI) have been challenged by recent claims that language problems are a consequence of impairments in domain-general mechanisms of learning that also play a key role in the process of language acquisition. Our studies were designed to test the generality and nature of this learning deficit by focusing on both sequential and nonsequential, and on verbal and nonverbal, domains. Twenty-nine children with SLI were compared with age-matched typically developing (TD) control children using (a) a serial reaction time task (SRT), testing the learning of motor sequences; (b) an artificial grammar learning (AGL) task, testing the extraction of regularities from auditory sequences; and (c) a weather prediction task (WP), testing probabilistic category learning in a nonsequential task. For the 2 sequence learning tasks, a significantly smaller proportion of children showed evidence of learning in the SLI than in the TD group (χ2 tests, p < .001 for the SRT task, p < .05 for the AGL task), whereas the proportion of learners on the WP task was the same in the 2 groups. The level of learning for SLI learners was comparable with that of TD children on all tasks (with great individual variation). Taken together, these findings suggest that domain-general processes of implicit sequence learning tend to be impaired in SLI. Further research is needed to clarify the relationship of deficits in implicit learning and language.
Shahinyan, Grigor; Margaryan, Armine; Panosyan, Hovik; Trchounian, Armen
2017-05-02
Among the huge diversity of thermophilic bacteria, mainly bacilli have been reported as active producers of thermostable lipases. Geothermal springs serve as the main source for the isolation of thermostable-lipase-producing bacilli. Thermostable lipolytic enzymes, which function under harsh conditions, have promising applications in the processing of organic chemicals, detergent formulation, synthesis of biosurfactants, pharmaceutical processing, etc. In order to study the distribution of lipase-producing thermophilic bacilli and the primary structures of their lipase proteins, three lipase producers from different genera were isolated from mesothermal (27.5-70 °C) springs distributed across the territory of Armenia and Nagorno Karabakh. Based on phenotypic characteristics and 16S rRNA gene sequencing, the isolates were identified as Geobacillus sp., Bacillus licheniformis, and Anoxybacillus flavithermus strains. The lipase genes of the isolates were sequenced using purpose-designed primer sets. Multiple alignments generated from the primary structures of the lipase proteins and annotated lipase protein sequences, analysis of conserved regions, and amino acid composition showed that the lipases are similar (98-99%) to the true lipases (family I) and the GDSL esterase family (family II). A conserved sequence block that determines thermostability was identified in the multiple alignments of the lipase proteins. The results shed light on the distribution of lipase-producing bacilli in geothermal springs in Armenia and Nagorno Karabakh. The newly isolated bacilli strains could be a prospective source of thermostable lipases and their genes.
Simulation-Based Evaluation of Learning Sequences for Instructional Technologies
ERIC Educational Resources Information Center
McEneaney, John E.
2016-01-01
Instructional technologies critically depend on systematic design, and learning hierarchies are a commonly advocated tool for designing instructional sequences. But hierarchies routinely allow numerous sequences and choosing an optimal sequence remains an unsolved problem. This study explores a simulation-based approach to modeling learning…
AMPS data management concepts. [Atmospheric, Magnetospheric and Plasma in Space experiment
NASA Technical Reports Server (NTRS)
Metzelaar, P. N.
1975-01-01
Five typical AMPS experiments were formulated to allow simulation studies to verify data management concepts. Design studies were conducted to analyze these experiments in terms of the applicable procedures, data processing and displaying functions. Design concepts for AMPS data management system are presented which permit both automatic repetitive measurement sequences and experimenter-controlled step-by-step procedures. Extensive use is made of a cathode ray tube display, the experimenters' alphanumeric keyboard, and the computer. The types of computer software required by the system and the possible choices of control and display procedures available to the experimenter are described for several examples. An electromagnetic wave transmission experiment illustrates the methods used to analyze data processing requirements.
Language Classification using N-grams Accelerated by FPGA-based Bloom Filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, A; Gokhale, M
N-gram (n-character sequences in text documents) counting is a well-established technique for classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm), which uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs at 85x the speed of comparable software and 1.45x that of the competing hardware design.
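The FPGA design itself is not reproduced here, but the underlying classification idea (one Bloom filter of character n-grams per language, scored by membership hits) can be sketched in software. The filter size, hash scheme, and toy training corpora below are assumptions for illustration only.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over a fixed-size bit array."""
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def ngrams(text, n=4):
    text = " ".join(text.lower().split())
    return (text[i:i + n] for i in range(len(text) - n + 1))

def train(samples, n=4):
    """One Bloom filter per language, filled with that language's n-grams."""
    filters = {}
    for lang, text in samples.items():
        bf = BloomFilter()
        for g in ngrams(text, n):
            bf.add(g)
        filters[lang] = bf
    return filters

def classify(document, filters, n=4):
    """Score = fraction of the document's n-grams found in each language's filter."""
    grams = list(ngrams(document, n))
    scores = {lang: sum(g in bf for g in grams) / max(len(grams), 1)
              for lang, bf in filters.items()}
    return max(scores, key=scores.get), scores

# Tiny hypothetical training corpora; a real system would use much larger samples.
samples = {"en": "the quick brown fox jumps over the lazy dog and runs away",
           "de": "der schnelle braune fuchs springt über den faulen hund und läuft weg"}
filters = train(samples)
print(classify("the dog runs over the fox", filters)[0])   # likely 'en'
```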
Bashir, Ali; Bansal, Vikas; Bafna, Vineet
2010-06-18
Massively parallel DNA sequencing technologies have enabled the sequencing of several individual human genomes. These technologies are also being used in novel ways for mRNA expression profiling, genome-wide discovery of transcription-factor binding sites, small RNA discovery, etc. The multitude of sequencing platforms, each with its unique characteristics, poses a number of design challenges regarding the technology to be used and the depth of sequencing required for a particular application. Here we describe a number of analytical and empirical results to address design questions for two applications: detecting structural variations from paired-end sequencing and estimating mRNA transcript abundance. For structural variation, our results provide explicit trade-offs between the detection and resolution of rearrangement breakpoints and the optimal mix of paired-read insert lengths. Specifically, we prove that optimal detection and resolution of breakpoints are achieved using a mix of exactly two insert library lengths. Furthermore, we derive explicit formulae to determine these insert length combinations, enabling a 15% improvement in breakpoint detection at the same experimental cost. On empirical short-read data, these predictions show good concordance with Illumina 200 bp and 2 kbp insert length libraries. For transcriptome sequencing, we determine the sequencing depth needed to detect rare transcripts from a small pilot study. With only 1 million reads, we derive corrections that enable almost perfect prediction of the underlying expression probability distribution, and use this to predict the sequencing depth required to detect genes with low expression with greater than 95% probability. Together, our results form a generic framework for many design considerations related to high-throughput sequencing. We provide software tools (http://bix.ucsd.edu/projects/NGS-DesignTools) to derive platform-independent guidelines for designing sequencing experiments (amount of sequencing, choice of insert length, mix of libraries) for novel applications of next-generation sequencing.
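The paper's explicit formulae are not reproduced in the abstract; the following sketch only illustrates the kind of depth calculation involved, using a simple Poisson sampling assumption to find the total read count at which a transcript of a given relative abundance is seen with at least 95% probability.

```python
import math

def reads_needed(expression_fraction, min_reads=1, prob=0.95):
    """
    Smallest total read count N such that a transcript contributing
    `expression_fraction` of all reads is observed at least `min_reads` times
    with probability >= `prob`, under a simple Poisson sampling assumption.
    """
    n = 1
    while True:
        lam = n * expression_fraction
        # P(X >= min_reads) for X ~ Poisson(lam)
        p_detect = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(min_reads))
        if p_detect >= prob:
            return n
        n = int(n * 1.1) + 1   # coarse geometric search, fast enough for a sketch

# e.g. a transcript contributing 1 read in 10 million:
print(reads_needed(1e-7))                 # roughly 3e7 reads for one supporting read
print(reads_needed(1e-7, min_reads=5))    # considerably more for 5 supporting reads
```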
An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets.
Hosseini, Parsa; Tremblay, Arianne; Matthews, Benjamin F; Alkharouf, Nadim W
2010-07-02
The data produced by an Illumina flow cell with all eight lanes occupied amount to well over a terabyte of images, with gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance. One can easily be flooded with such a great volume of textual, unannotated data irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, provides INDEL detection, SNP information, and allele calling. Extracting from such analyses not only a measure of gene expression in the form of tag counts but also annotation of the sequenced reads is therefore of significant value. We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag counts while annotating sequenced reads with each gene's presumed function, from any given CASAVA build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag counting and annotation. The end result is output containing the homology-based functional annotation and the corresponding gene expression measure, signifying how many times sequenced reads were found within the genomic ranges of functional annotations. TASE is a powerful tool that facilitates the process of annotating a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deep into a given CASAVA build and maximize the information extracted from a sequencing dataset. TASE is specially designed to translate sequence data in a CASAVA build into functional annotations while producing corresponding gene expression measurements. Such analysis is executed in an ultrafast and highly efficient manner, whether the experiment is single-read or paired-end. TASE is a user-friendly and freely available application, allowing rapid analysis and annotation of any given Illumina Solexa sequencing dataset with ease.
GSP: A web-based platform for designing genome-specific primers in polyploids
USDA-ARS?s Scientific Manuscript database
The sequences of the subgenomes in a polyploid species are highly similar, which makes it difficult to design genome-specific primers for sequence analysis. We present a web-based platform named GSP for designing genome-specific primers to distinguish subgenome sequences in the polyploid genome backgr...
ERIC Educational Resources Information Center
Heath, Steve M.; Hogben, John H.
2004-01-01
Background: Claims that children with reading and oral language deficits have impaired perception of sequential sounds are usually based on psychophysical measures of auditory temporal processing (ATP) designed to characterise group performance. If we are to use these measures (e.g., the Tallal, 1980, Repetition Test) as the basis for intervention…
Supporting Teachers' Use of Research-Based Instructional Sequences
ERIC Educational Resources Information Center
Cobb, Paul; Jackson, Kara
2015-01-01
In this paper, we frame the dissemination of the products of classroom design studies as a process of supporting the learning of large numbers of teachers. We argue that high-quality pull-out professional development is essential but not sufficient, and go on to consider teacher collaboration and one-on-one coaching in the classroom as additional…
ERIC Educational Resources Information Center
Shantz, Kailen
2017-01-01
This study reports on a self-paced reading experiment in which native and non-native speakers of English read sentences designed to evaluate the predictions of usage-based and rule-based approaches to second language acquisition (SLA). Critical stimuli were four-word sequences embedded into sentences in which phrase frequency and grammaticality…
Sheu, Shyang-Chwen; Tsou, Po-Chuan; Lien, Yi-Yang; Lee, Meng-Shiou
2018-08-15
Peanut is widely and commonly used in cuisines around the world. However, peanut is also one of the most important food allergens causing anaphylactic reactions. The best way to prevent an allergic reaction is to avoid the allergen, or foods containing allergenic ingredients such as peanut, before consumption. Efficient and precise detection of peanut or peanut-containing products is therefore essential for protecting consumers' health and interests. In this study, a loop-mediated isothermal amplification (LAMP) assay was developed for the detection of allergenic peanut using specifically designed primer sets. Two sets of specific LAMP primers, targeting the internal transcribed spacer 1 (ITS1) region of the nuclear ribosomal DNA and the ara h1 gene sequence of Arachis hypogaea (peanut), were used to assess the application of LAMP for detecting peanut in processed foods or diets. The results demonstrated that identification of peanut using the newly designed primers for the ITS1 sequence is more sensitive than using primers for the Ara h1 gene sequence in the LAMP assay. In addition, the sensitivity of LAMP for detecting peanut is higher than that of the traditional PCR method. These LAMP primer sets showed high specificity for the identification of peanut and no cross-reaction with other nut species, including walnut, hazelnut, almond, cashew, and macadamia nut. Moreover, when as little as 0.1% peanut was mixed with other nut ingredients at different ratios, no cross-reactivity was evident during LAMP. Finally, genomic DNAs extracted from boiled and steamed peanut were used as templates; the detection of peanut by LAMP was unaffected and reproducible. With the LAMP assay established here, not only can peanut ingredients be detected but commercial foods containing peanut can also be identified. This assay should be useful for the rapid detection of peanut in practical food markets. Copyright © 2018 Elsevier Ltd. All rights reserved.
Molecular beacon sequence design algorithm.
Monroe, W Todd; Haselton, Frederick R
2003-01-01
A method based on Web-based tools is presented for designing optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since a molecular beacon's performance depends on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public-domain algorithm described here may be usefully employed to aid in molecular beacon design.
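The published workflow relies on Excel macros and mfold structural predictions; as a simplified stand-in, the sketch below ranks candidate probe (loop) sequences on a few sequence-level criteria (rough Tm, GC content, and a self-complementary stem check). The stem sequence, target Tm, and penalty weights are arbitrary assumptions, not the published scoring scheme.

```python
def gc_fraction(seq):
    return sum(b in "GC" for b in seq) / len(seq)

def wallace_tm(seq):
    """Rough 2+4 rule melting temperature estimate."""
    return 2 * sum(b in "AT" for b in seq) + 4 * sum(b in "GC" for b in seq)

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def stem_ok(beacon, stem_len=5):
    """A beacon needs self-complementary ends to form its hairpin stem."""
    return beacon[:stem_len] == revcomp(beacon[-stem_len:])

def score(loop, stem="CGCTC", target_tm=60.0):
    """Lower is better: penalize Tm offset and extreme GC content (weights are arbitrary)."""
    beacon = stem + loop + revcomp(stem)
    penalty = abs(wallace_tm(loop) - target_tm) + 50 * abs(gc_fraction(loop) - 0.5)
    return (penalty if stem_ok(beacon, len(stem)) else float("inf")), beacon

# Rank hypothetical loop (probe) candidates against a target region.
candidates = ["ATGGCTAGCTAGGATCCAT", "GCGCGGCCGCGGGCCGCGC", "ATATATATTTATATATAAT"]
for loop in sorted(candidates, key=lambda c: score(c)[0]):
    penalty, beacon = score(loop)
    print(f"{loop}  penalty={penalty:.1f}  beacon={beacon}")
```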
NASA Astrophysics Data System (ADS)
van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko
2017-11-01
Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Air coolers therefore have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting with a CFD tool is desired. This paper presents a one-dimensional transient model approach suitable for use as a zero-dimensional wall function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multi-stage defrost model is introduced (e.g., [1, 2]). In a first instance, the multi-stage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated, with fixed boundary conditions provided at the frost interfaces. The simulation results verify the plausibility of the designed model. The evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.
Sensible use of antisense: how to use oligonucleotides as research tools.
Myers, K J; Dean, N M
2000-01-01
In the past decade, there has been a vast increase in the amount of gene sequence information that has the potential to revolutionize the way diseases are both categorized and treated. Old diagnoses, largely anatomical or descriptive in nature, are likely to be superseded by the molecular characterization of the disease. The recognition that certain genes drive key disease processes will also enable the rational design of gene-specific therapeutics. Antisense oligonucleotides represent a technology that should play multiple roles in this process.
The Development of a Course Sequence in Real-Time Systems Design
1993-08-01
The project was implemented in C. A group of students used the material learned in this course in their... Homework assignments are used to assess the students' learning process. The term project is to be done in teams of 2 to 4 students, and it starts very early in the...
Anaerobic sequencing batch reactors for wastewater treatment: a developing technology.
Zaiat, M; Rodrigues, J A; Ratusznei, S M; de Camargo, E F; Borzani, W
2001-01-01
This paper describes and discusses the main problems related to anaerobic batch and fed-batch processes for wastewater treatment. A critical analysis of the literature evaluated the industrial application viability and proposed alternatives to improve operation and control of this system. Two approaches were presented in order to make this anaerobic discontinuous process feasible for industrial application: (1) optimization of the operating procedures in reactors containing self-immobilized sludge as granules, and (2) design of bioreactors with inert support media for biomass immobilization.
Lakin, Matthew R.; Brown, Carl W.; Horwitz, Eli K.; Fanning, M. Leigh; West, Hannah E.; Stefanovic, Darko; Graves, Steven W.
2014-01-01
The development of large-scale molecular computational networks is a promising approach to implementing logical decision making at the nanoscale, analogous to cellular signaling and regulatory cascades. DNA strands with catalytic activity (DNAzymes) are one means of systematically constructing molecular computation networks with inherent signal amplification. Linking multiple DNAzymes into a computational circuit requires the design of substrate molecules that allow a signal to be passed from one DNAzyme to another through programmed biochemical interactions. In this paper, we chronicle an iterative design process guided by biophysical and kinetic constraints on the desired reaction pathways and use the resulting substrate design to implement heterogeneous DNAzyme signaling cascades. A key aspect of our design process is the use of secondary structure in the substrate molecule to sequester a downstream effector sequence prior to cleavage by an upstream DNAzyme. Our goal was to develop a concrete substrate molecule design to achieve efficient signal propagation with maximal activation and minimal leakage. We have previously employed the resulting design to develop high-performance DNAzyme-based signaling systems with applications in pathogen detection and autonomous theranostics. PMID:25347066
Using Dissimilarity Metrics to Identify Interesting Designs
NASA Technical Reports Server (NTRS)
Feather, Martin; Kiper, James
2006-01-01
A computer program helps to blend the power of automated-search software, which is able to generate large numbers of design solutions, with the insight of expert designers, who are able to identify preferred designs but do not have time to examine all the solutions. From among the many automated solutions to a given design problem, the program selects a smaller number of solutions that are worthy of scrutiny by the experts in the sense that they are sufficiently dissimilar from each other. The program makes the selection in an interactive process that involves a sequence of data-mining steps interspersed with visual displays of the results of these steps to the experts. At crucial points between steps, the experts provide directives to guide the process. The program uses heuristic search techniques to identify nearly optimal design solutions and uses dissimilarity metrics defined by the experts to characterize the degree to which solutions are interestingly different. The search, data-mining, and visualization features of the program were derived from previously developed risk-management software used to support a risk-centric design methodology.
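The program's specific data-mining steps and metrics are not given in the abstract; one common way to pick a small, mutually dissimilar subset from a large pool of solutions is greedy max-min (farthest-point) selection with an expert-defined dissimilarity function, sketched below with hypothetical parameter-vector designs and weights.

```python
def select_diverse(solutions, dissimilarity, k):
    """Greedy max-min (farthest-point) selection of k mutually dissimilar solutions."""
    chosen = [solutions[0]]                       # arbitrary seed; could instead pick the best-scoring design
    while len(chosen) < k:
        # pick the candidate whose distance to its nearest chosen solution is largest
        best = max((s for s in solutions if s not in chosen),
                   key=lambda s: min(dissimilarity(s, c) for c in chosen))
        chosen.append(best)
    return chosen

# Example: design solutions as parameter vectors, with an expert-defined weighted distance.
weights = [1.0, 0.2, 3.0]
def weighted_l1(a, b):
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

pool = [(1, 10, 0.1), (1.1, 12, 0.1), (5, 10, 0.9), (2, 40, 0.5), (5, 11, 0.95)]
print(select_diverse(pool, weighted_l1, k=3))
```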
NASA Astrophysics Data System (ADS)
Shuster, W.; Schifman, L. A.; Herrmann, D.
2017-12-01
Green infrastructure represents a broad set of site- to landscape-scale practices that can be flexibly implemented to increase sewershed retention capacity, and can thereby improve on the management of water quantity and quality. Although much green infrastructure presents as formal engineered designs, urbanized landscapes with highly-interspersed pervious surfaces (e.g., right-of-way, parks, lawns, vacant land) may offer ecosystem services as passive, infiltrative green infrastructure. Yet, infiltration and drainage processes are regulated by soil surface conditions, and then the layering of subsoil horizons, respectively. Drawing on a unique urban soil taxonomic and hydrologic dataset collected in 12 cities (each city representing a major soil order), we determined how urbanization processes altered the sequence of soil horizons (compared to pre-urbanized reference soil pedons) and modeled the hydrologic implications of these shifts in layering with an unsaturated zone code (HYDRUS2D). We found that the different layering sequences in urbanized soils render different types and extents of supporting (plant-available soil water), provisioning (productive vegetation), and regulating (runoff mitigation) ecosystem services.
CloudAligner: A fast and full-featured MapReduce based tool for sequence mapping.
Nguyen, Tung; Shi, Weisong; Ruden, Douglas
2011-06-06
Research in genetics has developed rapidly in recent years with the aid of next-generation sequencing (NGS). However, massively parallel NGS produces enormous amounts of data, which leads to storage, compatibility, scalability, and performance issues. The cloud computing and MapReduce framework, which utilizes hundreds or thousands of shared computers to map sequencing reads quickly and efficiently to reference genome sequences, appears to be a very promising solution for these issues. Consequently, it has been adopted by many organizations recently, and the initial results are very promising. However, since these are only initial steps toward this trend, the developed software does not yet provide adequate primary functions, such as bisulfite or paired-end mapping, that are available in on-site software such as RMAP or BS Seeker. In addition, existing MapReduce-based applications were not designed to process the long reads produced by the most recent second-generation and third-generation NGS instruments and are therefore inefficient. Finally, it is difficult for the majority of biologists, who are untrained in programming, to use these tools because most were developed on Linux with a command-line interface. To encourage the use of cloud technologies in genomics and prepare for advances in second- and third-generation DNA sequencing, we have built a Hadoop MapReduce-based application, CloudAligner, which achieves higher performance, covers most primary features, is more accurate, and has a user-friendly interface. It was also designed to handle long sequences. The performance gain of CloudAligner over cloud-based counterparts (35 to 80%) mainly comes from the omission of the reduce phase. In comparison to local approaches, the performance gain of CloudAligner comes from the partitioning and parallel processing of the huge reference genome as well as the reads. The source code of CloudAligner is available at http://cloudaligner.sourceforge.net/ and its web version is at http://mine.cs.wayne.edu:8080/CloudAligner/. Our results show that CloudAligner is faster than CloudBurst, provides more accurate results than RMAP, and supports various input and output formats. In addition, with its web-based interface, it is easier to use than its counterparts.
Guo, Chun-Teng; McClean, Stephen; Shaw, Chris; Rao, Ping-Fan; Ye, Ming-Yu; Bjourson, Anthony J
2013-05-01
A novel Kunitz BPTI-like peptide with chymotrypsin inhibitory activity, designated BBPTI-1, was identified from the venom of Burmese Daboia russelii siamensis. It was purified by three chromatography steps: gel filtration, cation exchange, and reversed phase. A partial N-terminal sequence of BBPTI-1, HDRPKFCYLPADPGECLAHMRSF, was obtained by automated Edman degradation, and a Ki value of 4.77 nM was determined. Cloning of BBPTI-1, including the open reading frame and 3' untranslated region, was achieved from cDNA libraries derived from lyophilized venom using a 3' RACE strategy. In addition, a cDNA sequence designated BBPTI-5 was also obtained. Alignment of the cDNA sequences showed that BBPTI-5 was identical to the BBPTI-1 cDNA except for an eight-nucleotide deletion in the open reading frame. The deletions in the BBPTI-5 cDNA result in a novel protease inhibitor analog. Amino acid sequence alignment revealed that the peptides deduced from cloning of their respective precursor cDNAs show high similarity and homology with other Kunitz BPTI proteinase inhibitors. BBPTI-1 and BBPTI-5 consist of 60 and 66 amino acid residues, respectively, including six conserved cysteine residues. As these peptides have been reported to influence the processes of coagulation, fibrinolysis, and inflammation, their potential application in biomedical contexts warrants further investigation. Copyright © 2013 Elsevier Inc. All rights reserved.
Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae
2013-12-20
Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.
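The SCHEMA disruption score E counts residue contacts in a chimera whose amino-acid pairing is not found in any single parent. The sketch below assumes a precomputed contact list and aligned parent sequences; the sequences and contacts are toy placeholders, and this is not the authors' implementation.

```python
def schema_E(chimera, parents, contacts):
    """
    Count contacting residue pairs (i, j) whose amino-acid combination in the
    chimera is not found in any single parent: the SCHEMA disruption score E.
    `chimera` and each parent are aligned sequences; `contacts` is a list of
    index pairs taken from a reference crystal structure.
    """
    E = 0
    for i, j in contacts:
        pair = (chimera[i], chimera[j])
        if not any((p[i], p[j]) == pair for p in parents):
            E += 1
    return E

# Toy example with two short "parents" and a chimera taking its first half from
# parent A and its second half from parent B (sequences and contacts are hypothetical).
parent_a = "MKTLAVGILQ"
parent_b = "MRSLGVAILK"
chimera  = parent_a[:5] + parent_b[5:]
contacts = [(1, 8), (2, 6), (4, 5), (0, 9)]
print(schema_E(chimera, [parent_a, parent_b], contacts))   # prints the number of broken contacts
```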
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Grant S.; Mills, Jeffrey L.; Miley, Michael J.
2015-10-15
Protein design tests our understanding of protein stability and structure. Successful design methods should allow the exploration of sequence space not found in nature. However, when redesigning naturally occurring protein structures, most fixed-backbone design algorithms return amino acid sequences that share strong sequence identity with wild-type sequences, especially in the protein core. This behavior restricts the functional space that can be explored and is not consistent with observations from nature, where sequences of low identity have similar structures. Here, we allow backbone flexibility during design to mutate every position in the core (38 residues) of a four-helix bundle protein. Only small perturbations to the backbone, 1-2 Å, were needed to entirely mutate the core. The redesigned protein, DRNN, is exceptionally stable (melting point >140 °C). An NMR and X-ray crystal structure show that the side chains and backbone were accurately modeled (all-atom RMSD = 1.3 Å).
New evidence of a rhythmic priming effect that enhances grammaticality judgments in children.
Chern, Alexander; Tillmann, Barbara; Vaughan, Chloe; Gordon, Reyna L
2018-09-01
Musical rhythm and the grammatical structure of language share a surprising number of characteristics that may be intrinsically related in child development. The current study aimed to understand the potential influence of musical rhythmic priming on subsequent spoken grammar task performance in children with typical development who were native speakers of English. Participants (ages 5-8 years) listened to rhythmically regular and irregular musical sequences (within-participants design) followed by blocks of grammatically correct and incorrect sentences upon which they were asked to perform a grammaticality judgment task. Rhythmically regular musical sequences improved performance in grammaticality judgment compared with rhythmically irregular musical sequences. No such effect of rhythmic priming was found in two nonlinguistic control tasks, suggesting a neural overlap between rhythm processing and mechanisms recruited during grammar processing. These findings build on previous research investigating the effect of rhythmic priming by extending the paradigm to a different language, testing a younger population, and employing nonlanguage control tasks. These findings of an immediate influence of rhythm on grammar states (temporarily augmented grammaticality judgment performance) also converge with previous findings of associations between rhythm and grammar traits (stable generalized grammar abilities) in children. Taken together, the results of this study provide additional evidence for shared neural processing for language and music and warrant future investigations of potentially beneficial effects of innovative musical material on language processing. Copyright © 2018 Elsevier Inc. All rights reserved.
Single helically folded aromatic oligoamides that mimic the charge surface of double-stranded B-DNA
NASA Astrophysics Data System (ADS)
Ziach, Krzysztof; Chollet, Céline; Parissi, Vincent; Prabhakaran, Panchami; Marchivie, Mathieu; Corvaglia, Valentina; Bose, Partha Pratim; Laxmi-Reddy, Katta; Godde, Frédéric; Schmitter, Jean-Marie; Chaignepain, Stéphane; Pourquier, Philippe; Huc, Ivan
2018-05-01
Numerous essential biomolecular processes require the recognition of DNA surface features by proteins. Molecules mimicking these features could potentially act as decoys and interfere with pharmacologically or therapeutically relevant protein-DNA interactions. Although naturally occurring DNA-mimicking proteins have been described, synthetic tunable molecules that mimic the charge surface of double-stranded DNA are not known. Here, we report the design, synthesis and structural characterization of aromatic oligoamides that fold into single helical conformations and display a double helical array of negatively charged residues in positions that match the phosphate moieties in B-DNA. These molecules were able to inhibit several enzymes possessing non-sequence-selective DNA-binding properties, including topoisomerase 1 and HIV-1 integrase, presumably through specific foldamer-protein interactions, whereas sequence-selective enzymes were not inhibited. Such modular and synthetically accessible DNA mimics provide a versatile platform to design novel inhibitors of protein-DNA interactions.
Mechanical System Analysis/Design Tool (MSAT) Quick Guide
NASA Technical Reports Server (NTRS)
Lee, HauHua; Kolb, Mark; Madelone, Jack
1998-01-01
MSAT is a unique multi-component, multi-disciplinary tool that organizes design analysis tasks around object-oriented representations of configuration components, analysis programs and modules, and the data transfer links between them. This modular architecture enables rapid generation of input streams for trade-off studies of various engine configurations. Once the sequence is set up by the user, the data transfer links automatically transport output from one application as relevant input to the next. The computations are managed via constraint propagation, with the constraints supplied by the user as part of any optimization module. The software can be used in the preliminary design stage as well as during detailed design in the product development process.
Gender and the performance of music
Sergeant, Desmond C.; Himonides, Evangelos
2014-01-01
This study evaluates propositions that have appeared in the literature that music phenomena are gendered. Were they present in the musical “message,” gendered qualities might be imparted at any of three stages of the music–communication interchange: the process of composition, its realization into sound by the performer, or imposed by the listener in the process of perception. The research was designed to obtain empirical evidence to enable evaluation of claims of the presence of gendering at these three stages. Three research hypotheses were identified and relevant literature of music behaviors and perception reviewed. New instruments of measurement were constructed to test the three hypotheses: (i) two listening sequences each containing 35 extracts from published recordings of compositions of the classical music repertoire, (ii) four “music characteristics” scales, with polarities defined by verbal descriptors designed to assess the dynamic and emotional valence of the musical extracts featured in the listening sequences. 69 musically-trained listeners listened to the two sequences and were asked to identify the sex of the performing artist of each musical extract; a second group of 23 listeners evaluated the extracts applying the four music characteristics scales. Results did not support claims that music structures are inherently gendered, nor proposals that performers impart their own-sex-specific qualities to the music. It is concluded that gendered properties are imposed subjectively by the listener, and these are primarily related to the tempo of the music. PMID:24795663
Open-Phylo: a customizable crowd-computing platform for multiple sequence alignment
2013-01-01
Citizen science games such as Galaxy Zoo, Foldit, and Phylo aim to harness the intelligence and processing power generated by crowds of online gamers to solve scientific problems. However, the selection of the data to be analyzed through these games is under the exclusive control of the game designers, and so are the results produced by gamers. Here, we introduce Open-Phylo, a freely accessible crowd-computing platform that enables any scientist to enter our system and use crowds of gamers to assist computer programs in solving one of the most fundamental problems in genomics: the multiple sequence alignment problem. PMID:24148814
Next Generation Sequence Analysis and Computational Genomics Using Graphical Pipeline Workflows
Torri, Federica; Dinov, Ivo D.; Zamanyan, Alen; Hobel, Sam; Genco, Alex; Petrosyan, Petros; Clark, Andrew P.; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Knowles, James A.; Ames, Joseph; Kesselman, Carl; Toga, Arthur W.; Potkin, Steven G.; Vawter, Marquis P.; Macciardi, Fabio
2012-01-01
Whole-genome and exome sequencing have already proven to be essential and powerful methods to identify genes responsible for simple Mendelian inherited disorders. These methods can be applied to complex disorders as well, and have been adopted as one of the current mainstream approaches in population genetics. These achievements have been made possible by next generation sequencing (NGS) technologies, which require substantial bioinformatics resources to analyze the dense and complex sequence data. The huge analytical burden of data from genome sequencing might be seen as a bottleneck slowing the publication of NGS papers at this time, especially in psychiatric genetics. We review the existing methods for processing NGS data, to place into context the rationale for the design of a computational resource. We describe our method, the Graphical Pipeline for Computational Genomics (GPCG), to perform the computational steps required to analyze NGS data. The GPCG implements flexible workflows for basic sequence alignment, sequence data quality control, single nucleotide polymorphism analysis, copy number variant identification, annotation, and visualization of results. These workflows cover all the analytical steps required for NGS data, from processing the raw reads to variant calling and annotation. The current version of the pipeline is freely available at http://pipeline.loni.ucla.edu. These applications of NGS analysis may gain clinical utility in the near future (e.g., identifying miRNA signatures in diseases) when the bioinformatics approach is made feasible. Taken together, the annotation tools and strategies that have been developed to retrieve information and test hypotheses about the functional role of variants present in the human genome will help to pinpoint the genetic risk factors for psychiatric disorders. PMID:23139896
Neutrality and evolvability of designed protein sequences
NASA Astrophysics Data System (ADS)
Bhattacherjee, Arnab; Biswas, Parbati
2010-07-01
The effect of foldability on a protein's evolvability is analyzed by a two-pronged approach consisting of a self-consistent mean-field theory and Monte Carlo simulations. Theory and simulation models representing protein sequences with binary patterning of amino acid residues compatible with a particular foldability criterion are used. This generalized foldability criterion is derived using the high-temperature cumulant expansion approximating the free energy of folding. The effect of cumulative point mutations on these designed proteins is studied under neutral conditions. Robustness, a protein's ability to tolerate random point mutations, is determined with a selective pressure of stability (ΔΔG) for the theory-designed sequences, which are found to be more robust than the Monte Carlo and mean-field-biased Monte Carlo generated sequences. The results show that this foldability criterion selects viable protein sequences more effectively than the Monte Carlo method, which has a marked effect on how the selective pressure shapes the evolutionary sequence space. These observations may impact de novo sequence design and its applications in protein engineering.
Information processing. [in human performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Flach, John M.
1988-01-01
Theoretical models of sensory-information processing by the human brain are reviewed from a human-factors perspective, with a focus on their implications for aircraft and avionics design. The topics addressed include perception (signal detection and selection), linguistic factors in perception (context provision, logical reversals, absence of cues, and order reversals), mental models, and working and long-term memory. Particular attention is given to decision-making problems such as situation assessment, decision formulation, decision quality, selection of action, the speed-accuracy tradeoff, stimulus-response compatibility, stimulus sequencing, dual-task performance, task difficulty and structure, and factors affecting multiple task performance (processing modalities, codes, and stages).
Practical issues in implementing whole-genome-sequencing in routine diagnostic microbiology.
Rossen, J W A; Friedrich, A W; Moran-Gilad, J
2018-04-01
Next-generation sequencing (NGS) is increasingly being used in clinical microbiology. Like every new technology adopted in microbiology, the integration of NGS into clinical and routine workflows must be carefully managed. Our aim is to review the practical aspects of implementing bacterial whole-genome sequencing (WGS) in routine diagnostic laboratories, based on a review of the literature and expert opinion. In this review, we discuss when and how to integrate WGS into the routine workflow of the clinical laboratory. In addition, as microbiology laboratories have to adhere to various national and international regulations and criteria for their accreditation, we consider quality control issues for using WGS in microbiology, including the importance of proficiency testing. Furthermore, the current and future place of this technology in the diagnostic hierarchy of microbiology is described, as well as the necessity of maintaining backwards compatibility with already established methods. Finally, we speculate on whether WGS can entirely replace routine microbiology in the future, and on the tension between the fact that most sequencers are designed to process multiple samples in parallel and the preference, for optimal diagnosis, for one-by-one processing of samples. Special reference is made to the cost and turnaround time of WGS in diagnostic laboratories. Further development is required to improve the workflow for WGS, in particular to shorten the turnaround time, reduce costs, and streamline downstream data analyses. Only when these processes reach maturity will reliance on WGS for routine patient management and infection control become feasible, enabling the transformation of clinical microbiology into a genome-based and personalized diagnostic field. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Leonard, Susan R.; Mammel, Mark K.; Lacher, David W.
2015-01-01
Culture-independent diagnostics reduce the reliance on traditional (and slower) culture-based methodologies. Here we capitalize on advances in next-generation sequencing (NGS) to apply this approach to food pathogen detection utilizing NGS as an analytical tool. In this study, spiking spinach with Shiga toxin-producing Escherichia coli (STEC) following an established FDA culture-based protocol was used in conjunction with shotgun metagenomic sequencing to determine the limits of detection, sensitivity, and specificity levels and to obtain information on the microbiology of the protocol. We show that an expected level of contamination (∼10 CFU/100 g) could be adequately detected (including key virulence determinants and strain-level specificity) within 8 h of enrichment at a sequencing depth of 10,000,000 reads. We also rationalize the relative benefit of static versus shaking culture conditions and the addition of selected antimicrobial agents, thereby validating the long-standing culture-based parameters behind such protocols. Moreover, the shotgun metagenomic approach was informative regarding the dynamics of microbial communities during the enrichment process, including initial surveys of the microbial loads associated with bagged spinach; the microbes found included key genera such as Pseudomonas, Pantoea, and Exiguobacterium. Collectively, our metagenomic study highlights and considers various parameters required for transitioning to such sequencing-based diagnostics for food safety and the potential to develop better enrichment processes in a high-throughput manner not previously possible. Future studies will investigate new species-specific DNA signature target regimens, rational design of medium components in concert with judicious use of additives, such as antibiotics, and alterations in the sample processing protocol to enhance detection. PMID:26386062
Strategies for automatic processing of large aftershock sequences
NASA Astrophysics Data System (ADS)
Kvaerna, T.; Gibbons, S. J.
2017-12-01
Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel
2013-01-01
Musculoskeletal disorders (MSDs) that result from poor ergonomic design are among the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on an assessment method that detects and evaluates the potential risks of such physical injuries to the operator. The assessment method improves the process design by identifying potential ergonomic improvements among various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the phases of the product life cycle. This paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool that is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment allow the functional performance of the parts to be addressed. Such tools can also provide an ergonomic workstation design, together with a competitive advantage in the assembly process. The method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences between the evaluations of the proposed method and those of the widely used Rapid Upper Limb Assessment (RULA) method.
Portable and Error-Free DNA-Based Data Storage.
Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica
2017-07-10
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
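The published pipeline's encoding and deletion-correcting codes are not reproduced here; the toy sketch below only illustrates the basic encode/decode step of packing bytes into bases with a fixed-length address prefix for random access. Homopolymer constraints and error correction, which the real system requires for nanopore reads, are deliberately omitted.

```python
BASES = "ACGT"

def bytes_to_dna(data):
    """2 bits per base: each byte becomes 4 bases."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for b in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(b)
        out.append(byte)
    return bytes(out)

def encode_block(address, payload):
    """Prefix each synthesized strand with a fixed-length address for random access."""
    return bytes_to_dna(address.to_bytes(2, "big")) + bytes_to_dna(payload)

def decode_block(strand, address_bases=8):
    address = int.from_bytes(dna_to_bytes(strand[:address_bases]), "big")
    return address, dna_to_bytes(strand[address_bases:])

strand = encode_block(7, b"hello DNA")
print(strand)
print(decode_block(strand))   # (7, b'hello DNA')
```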
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLoughlin, Kevin
2016-01-11
This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors, and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
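The full generative model is given in the cited technical report; the sketch below only simulates the Chinese restaurant process prior that drives the sparse selection of genomes, with an illustrative concentration parameter alpha.

```python
import random

def chinese_restaurant_process(num_reads, alpha=1.0, seed=0):
    """
    Assign reads to "tables" (genomes/abundance clusters): a new read joins an
    existing table with probability proportional to its current occupancy, or
    starts a new table with probability proportional to alpha.
    """
    rng = random.Random(seed)
    tables = []                       # occupancy counts
    assignments = []
    for _ in range(num_reads):
        total = sum(tables) + alpha
        r = rng.uniform(0, total)
        cum = 0.0
        for t, count in enumerate(tables):
            cum += count
            if r < cum:
                tables[t] += 1
                assignments.append(t)
                break
        else:
            tables.append(1)          # open a new table
            assignments.append(len(tables) - 1)
    return tables, assignments

tables, _ = chinese_restaurant_process(10_000, alpha=2.0)
abundances = sorted((n / 10_000 for n in tables), reverse=True)
print(len(tables), "clusters; top abundances:", [round(a, 3) for a in abundances[:5]])
```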
Content, Structure, and Sequence of the Detailing Discipline at Kendall College of Art and Design.
ERIC Educational Resources Information Center
Mulder, Bruce E.
A study identified the appropriate general content, structure, and sequence for a detailing discipline that promoted student achievement to professional levels. Its focus was the detailing discipline, a sequence of studio courses within the furniture design program at Kendall College of Art and Design, Grand Rapids, Michigan. (Detailing, an…
A Systematic Bayesian Integration of Epidemiological and Genetic Data
Lau, Max S. Y.; Marion, Glenn; Streftaris, George; Gibson, Gavin
2015-01-01
Genetic sequence data on pathogens have great potential to inform inference of their transmission dynamics ultimately leading to better disease control. Where genetic change and disease transmission occur on comparable timescales additional information can be inferred via the joint analysis of such genetic sequence data and epidemiological observations based on clinical symptoms and diagnostic tests. Although recently introduced approaches represent substantial progress, for computational reasons they approximate genuine joint inference of disease dynamics and genetic change in the pathogen population, capturing partially the joint epidemiological-evolutionary dynamics. Improved methods are needed to fully integrate such genetic data with epidemiological observations, for achieving a more robust inference of the transmission tree and other key epidemiological parameters such as latent periods. Here, building on current literature, a novel Bayesian framework is proposed that infers simultaneously and explicitly the transmission tree and unobserved transmitted pathogen sequences. Our framework facilitates the use of realistic likelihood functions and enables systematic and genuine joint inference of the epidemiological-evolutionary process from partially observed outbreaks. Using simulated data it is shown that this approach is able to infer accurately joint epidemiological-evolutionary dynamics, even when pathogen sequences and epidemiological data are incomplete, and when sequences are available for only a fraction of exposures. These results also characterise and quantify the value of incomplete and partial sequence data, which has important implications for sampling design, and demonstrate the abilities of the introduced method to identify multiple clusters within an outbreak. The framework is used to analyse an outbreak of foot-and-mouth disease in the UK, enhancing current understanding of its transmission dynamics and evolutionary process. PMID:26599399
HIV-1 transmission linkage in an HIV-1 prevention clinical trial
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leitner, Thomas; Campbell, Mary S; Mullins, James I
2009-01-01
HIV-1 sequencing has been used extensively in epidemiologic and forensic studies to investigate patterns of HIV-1 transmission. However, the criteria for establishing genetic linkage between HIV-1 strains in HIV-1 prevention trials have not been formalized. The Partners in Prevention HSV/HIV Transmission Study (ClinicalTrials.gov NCT00194519) enrolled 3408 HIV-1 serodiscordant heterosexual African couples to determine the efficacy of genital herpes suppression with acyclovir in reducing HIV-1 transmission. The trial analysis required laboratory confirmation of HIV-1 linkage between enrolled partners in couples in which seroconversion occurred. Here we describe the process and results from HIV-1 sequencing studies used to perform transmission linkage determination in this clinical trial. Consensus Sanger sequencing of env (C2-V3-C3) and gag (p17-p24) genes was performed on plasma HIV-1 RNA from both partners within 3 months of seroconversion; env single molecule or pyrosequencing was also performed in some cases. For linkage, we required monophyletic clustering between HIV-1 sequences in the transmitting and seroconverting partners, and developed a Bayesian algorithm using genetic distances to evaluate the posterior probability of linkage of participants' sequences. Adjudicators classified transmissions as linked, unlinked, or indeterminate. Among 151 seroconversion events, we found 108 (71.5%) linked, 40 (26.5%) unlinked, and 3 (2.0%) to have indeterminate transmissions. Nine (8.3%) were linked by consensus gag sequencing only and 8 (7.4%) required deep sequencing of env. In this first use of HIV-1 sequencing to establish endpoints in a large clinical trial, more than one-fourth of transmissions were unlinked to the enrolled partner, illustrating the relevance of these methods in the design of future HIV-1 prevention trials in serodiscordant couples. A hierarchy of sequencing techniques, analysis methods, and expert adjudication contributed to the linkage determination process.
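The linkage-adjudication step can be pictured as a two-hypothesis Bayes computation over genetic distances. The Python sketch below is a caricature: the Gaussian distance models and their parameters are invented, whereas the trial estimated its distance distributions and posterior algorithm from reference sequence panels.

    import math

    def normal_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def posterior_linked(distance, prior_linked=0.5):
        """Posterior probability that two HIV-1 sequences are epidemiologically linked,
        given their genetic distance. The two distance models (means/SDs) are made up
        for illustration only."""
        like_linked = normal_pdf(distance, mu=0.02, sigma=0.01)
        like_unlinked = normal_pdf(distance, mu=0.10, sigma=0.03)
        num = like_linked * prior_linked
        return num / (num + like_unlinked * (1 - prior_linked))

    print(round(posterior_linked(0.03), 3))   # small distance -> high linkage probability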
Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation
NASA Astrophysics Data System (ADS)
Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.
2018-01-01
Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species is one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.
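For orientation, a crude plug-in entropy estimate over species sequences can be computed from block frequencies, as in the Python sketch below. The toy sequences and block length are invented, and the paper's variational joint and topological entropy rates are defined over the typical set rather than by this naive estimator.

    import math
    from collections import Counter

    def plug_in_entropy_rate(sequences, block=2):
        """Crude plug-in estimate of a per-step entropy (bits) from observed
        chemical-species sequences, using block frequencies."""
        blocks = Counter()
        total = 0
        for seq in sequences:
            for i in range(len(seq) - block + 1):
                blocks[tuple(seq[i:i + block])] += 1
                total += 1
        H_block = -sum((c / total) * math.log2(c / total) for c in blocks.values())
        return H_block / block

    seqs = [["H2", "HO2", "H2O2", "H2O"], ["H2", "OH", "H2O"], ["H2", "HO2", "H2O"]]
    print(round(plug_in_entropy_rate(seqs), 3))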
A modular DNA signal translator for the controlled release of a protein by an aptamer.
Beyer, Stefan; Simmel, Friedrich C
2006-01-01
Owing to the intimate linkage of sequence and structure in nucleic acids, DNA is an extremely attractive molecule for the development of molecular devices, in particular when a combination of information processing and chemomechanical tasks is desired. Many of the previously demonstrated devices are driven by hybridization between DNA 'effector' strands and specific recognition sequences on the device. For applications it is of great interest to link several of such molecular devices together within artificial reaction cascades. Often it will not be possible to choose DNA sequences freely, e.g. when functional nucleic acids such as aptamers are used. In such cases translation of an arbitrary 'input' sequence into a desired effector sequence may be required. Here we demonstrate a molecular 'translator' for information encoded in DNA and show how it can be used to control the release of a protein by an aptamer using an arbitrarily chosen DNA input strand. The function of the translator is based on branch migration and the action of the endonuclease FokI. The modular design of the translator facilitates the adaptation of the device to various input or output sequences.
MiDAS: the field guide to the microbes of activated sludge
McIlroy, Simon Jon; Saunders, Aaron Marc; Albertsen, Mads; Nierychlo, Marta; McIlroy, Bianca; Hansen, Aviaja Anna; Karst, Søren Michael; Nielsen, Jeppe Lund; Nielsen, Per Halkjær
2015-01-01
The Microbial Database for Activated Sludge (MiDAS) field guide is a freely available online resource linking the identity of abundant and process critical microorganisms in activated sludge wastewater treatment systems to available data related to their functional importance. Phenotypic properties of some of these genera are described, but most are known only from sequence data. The MiDAS taxonomy is a manual curation of the SILVA taxonomy that proposes a name for all genus-level taxa observed to be abundant by large-scale 16S rRNA gene amplicon sequencing of full-scale activated sludge communities. The taxonomy can be used to classify unknown sequences, and the online MiDAS field guide links the identity to the available information about their morphology, diversity, physiology and distribution. The use of a common taxonomy across the field will provide a solid foundation for the study of microbial ecology of the activated sludge process and related treatment processes. The online MiDAS field guide is a collaborative workspace intended to facilitate a better understanding of the ecology of activated sludge and related treatment processes—knowledge that will be an invaluable resource for the optimal design and operation of these systems. Database URL: http://www.midasfieldguide.org PMID:26120139
Schmidt Am Busch, Marcel; Sedano, Audrey; Simonson, Thomas
2010-05-05
Protein fold recognition usually relies on a statistical model of each fold; each model is constructed from an ensemble of natural sequences belonging to that fold. A complementary strategy may be to employ sequence ensembles produced by computational protein design. Designed sequences can be more diverse than natural sequences, possibly avoiding some limitations of experimental databases. We explore this strategy for four SCOP families: Small Kunitz-type inhibitors (SKIs), Interleukin-8 chemokines, PDZ domains, and large Caspase catalytic subunits, represented by 43 structures. An automated procedure is used to redesign the 43 proteins. We use the experimental backbones as fixed templates in the folded state and a molecular mechanics model to compute the interaction energies between sidechain and backbone groups. Calculations are done with the Proteins@Home volunteer computing platform. A heuristic algorithm is used to scan the sequence and conformational space, yielding 200,000-300,000 sequences per backbone template. The results confirm and generalize our earlier study of SH2 and SH3 domains. The designed sequences resemble moderately distant natural homologues of the initial templates; e.g., the SUPERFAMILY profile hidden Markov model library recognizes 85% of the low-energy sequences as native-like. Conversely, Position Specific Scoring Matrices derived from the sequences can be used to detect natural homologues within the SwissProt database: 60% of known PDZ domains are detected and around 90% of known SKIs and chemokines. Energy components and inter-residue correlations are analyzed and ways to improve the method are discussed. For some families, designed sequences can be a useful complement to experimental ones for homologue searching. However, improved tools are needed to extract more information from the designed profiles before the method can be of general use.
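The homologue-detection step described above amounts to turning an ensemble of designed sequences into a Position Specific Scoring Matrix and scoring database sequences against it. A minimal sketch (Python), assuming ungapped, equal-length designs and a uniform background, both simplifications relative to the study:

    import math
    from collections import Counter

    def build_pssm(aligned_designs, alphabet="ACDEFGHIKLMNPQRSTVWY", pseudocount=1.0):
        """Position Specific Scoring Matrix (log-odds vs. a uniform background)
        built from a set of equal-length designed sequences."""
        length = len(aligned_designs[0])
        bg = 1.0 / len(alphabet)
        pssm = []
        for i in range(length):
            counts = Counter(seq[i] for seq in aligned_designs)
            total = len(aligned_designs) + pseudocount * len(alphabet)
            pssm.append({aa: math.log2((counts[aa] + pseudocount) / total / bg)
                         for aa in alphabet})
        return pssm

    def score(pssm, query):
        return sum(col[aa] for col, aa in zip(pssm, query))

    designs = ["ACDKL", "ACEKL", "GCDKL"]   # invented toy designs
    pssm = build_pssm(designs)
    print(round(score(pssm, "ACDKL"), 2), round(score(pssm, "WWWWW"), 2))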
E-TALEN: a web tool to design TALENs for genome engineering.
Heigwer, Florian; Kerr, Grainne; Walther, Nike; Glaeser, Kathrin; Pelz, Oliver; Breinig, Marco; Boutros, Michael
2013-11-01
Use of transcription activator-like effector nucleases (TALENs) is a promising new technique in the field of targeted genome engineering, editing and reverse genetics. Its applications span from introducing knockout mutations to endogenous tagging of proteins and targeted excision repair. Owing to this wide range of possible applications, there is a need for fast and user-friendly TALEN design tools. We developed E-TALEN (http://www.e-talen.org), a web-based tool to design TALENs for experiments of varying scale. E-TALEN enables the design of TALENs against a single target or a large number of target genes. We significantly extended previously published design concepts to consider genomic context and different applications. E-TALEN guides the user through an end-to-end design process of de novo TALEN pairs, which are specific to a certain sequence or genomic locus. Furthermore, E-TALEN offers a functionality to predict targeting and specificity for existing TALENs. Owing to the computational complexity of many of the steps in the design of TALENs, particular emphasis has been put on the implementation of fast yet accurate algorithms. We implemented a user-friendly interface, from the input parameters to the presentation of results. An additional feature of E-TALEN is the in-built sequence and annotation database available for many organisms, including human, mouse, zebrafish, Drosophila and Arabidopsis, which can be extended in the future.
NASA Astrophysics Data System (ADS)
Palo, S. E.; Li, X.; Woods, T. N.; Kohnert, R.
2014-12-01
There is a long history of cooperation between students at the University of Colorado, Boulder and professional engineers and scientists at LASP, which has led to many successful space missions with direct student involvement. The recent student-led missions include the Student Nitric Oxide Explorer (SNOE, 1998 - 2002), the Student Dust Counter (SDC) on New Horizons (2006 - present), the Colorado Student Space Weather Experiment (CSSWE), being a very successful NSF CubeSat that launched in September 2012, and the NASA Miniature X-ray Solar Spectrometer (MinXSS) CubeSat (launch will be in early 2015). Students are involved in all aspects of the design, and they experience the full scope of the mission process from concept, to fabrication and test, and mission operations. A significant part of the student involvement in the CubeSat projects is gained by using the CubeSat development as a focal point for an existing two-semester course sequence in CU's Aerospace Engineering Sciences (AES) Department: the Space Hardware Design section of Graduate Projects I & II (ASEN 5018 & ASEN 6028). The goal of these courses is to teach graduate students how to design and build systems using a requirement-based approach and fundamental systems engineering practices. The two-semester sequence takes teams of about 15 students from requirements definition and preliminary design through manufacturing, integration, and testing. In addition to the design process, students learn key professional skills such as working effectively in groups, finding solutions to open-ended problems, and actually building a system to their own set of specifications. The partnership between AES and LASP allows us to include engineering professionals in the mix, thus more effectively training science and engineering students for future roles in the civilian or commercial space industry. The mentoring process with LASP engineers helps to mitigate risk of the inexperience of the students and ensures consistent system engineer oversight for the multi-year CubeSat programs.
ERIC Educational Resources Information Center
Murdock, Margaret; Holman, R. W.; Slade, Tyler; Clark, Shelley L. D.; Rodnick, Kenneth J.
2014-01-01
A unique homework assignment has been designed as a review exercise to be implemented near the end of the one-year undergraduate organic chemistry sequence. Within the framework of the exercise, students derive potential mechanisms for glucose ring opening in the aqueous mutarotation process. In this endeavor, 21 general review principles are…
ERIC Educational Resources Information Center
Hass-Cohen, Noah; Clyde Findlay, Joanna; Carr, Richard; Vanderlan, Jessica
2014-01-01
The Check ("Check, Change What You Need To Change and/or Keep What You Want") art therapy protocol is a sequence of directives for treating trauma that is grounded in neurobiological theory and designed to facilitate trauma narrative processing, autobiographical coherency, and the rebalancing of dysregulated responses to psychosocial…
ERIC Educational Resources Information Center
Atterbury, Rob
2014-01-01
This guide provides an overview of a more robust online guide and toolkit available through ConnectEd Studios. It supplies a glimpse of the sequence of steps involved in creating a new Linked Learning pathway. This publication can help coaches, district leadership, and pathway teams gain an understanding of the overall process of designing and…
Large area, low cost solar cell development and production readiness
NASA Technical Reports Server (NTRS)
Michaels, D.
1982-01-01
A process sequence for a large area (greater than or equal to 25 sq cm) silicon solar cell was investigated. Generic cell choice was guided by the expected electron fluence, by the packing factors of various cell envelope designs onto each panel to provide needed voltage as well as current, by the weight constraints on the system, and by the cost goals of the contract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Xiaofan; Peris, David; Kominek, Jacek
2016-09-16
The availability of genomes across the tree of life is highly biased toward vertebrates, pathogens, human disease models, and organisms with relatively small and simple genomes. Recent progress in genomics has enabled the de novo decoding of the genome of virtually any organism, greatly expanding its potential for understanding the biology and evolution of the full spectrum of biodiversity. The increasing diversity of sequencing technologies, assays, and de novo assembly algorithms has augmented the complexity of de novo genome sequencing projects in nonmodel organisms. To reduce the costs and challenges in de novo genome sequencing projects and streamline their experimental design and analysis, we developed iWGS (in silico Whole Genome Sequencer and Analyzer), an automated pipeline for guiding the choice of appropriate sequencing strategy and assembly protocols. iWGS seamlessly integrates the four key steps of a de novo genome sequencing project: data generation (through simulation), data quality control, de novo assembly, and assembly evaluation and validation. The last three steps can also be applied to the analysis of real data. iWGS is designed to enable the user to have great flexibility in testing the range of experimental designs available for genome sequencing projects, and supports all major sequencing technologies and popular assembly tools. Three case studies illustrate how iWGS can guide the design of de novo genome sequencing projects, and evaluate the performance of a wide variety of user-specified sequencing strategies and assembly protocols on genomes of differing architectures. iWGS, along with a detailed documentation, is freely available at https://github.com/zhouxiaofan1983/iWGS.
Residual Stresses and Critical Initial Flaw Size Analyses of Welds
NASA Technical Reports Server (NTRS)
Brust, Frederick W.; Raju, Ivatury S.; Dawicke, David S.; Cheston, Derrick
2009-01-01
An independent assessment was conducted to determine the critical initial flaw size (CIFS) for the flange-to-skin weld in the Ares I-X Upper Stage Simulator (USS). A series of weld analyses was performed to determine the residual stresses in a critical region of the USS. Weld residual stresses increase both constraint and mean stress, thereby having an important effect on fatigue life. The purpose of the weld analyses was to model the weld process using a variety of sequences to determine the 'best' sequence in terms of weld residual stresses and distortions. The many factors examined in this study include weld design (single-V, double-V groove), weld sequence, boundary conditions, and material properties, among others. The results of this weld analysis are included with service loads to perform a fatigue and critical initial flaw size evaluation.
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as for consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
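At its core, the framework evaluates candidate sequential patterns by two quantities: a utility accumulated over matching sequences and a probability summed from the sequences' existence probabilities. The sketch below (Python) uses a simplified first-occurrence utility and ignores the paper's pruning and projection machinery; the data layout is invented for illustration.

    def contains_in_order(sequence, pattern):
        """True if the pattern's items occur in the sequence in order (subsequence match)."""
        it = iter(item for item, _ in sequence)
        return all(p in it for p in pattern)

    def pattern_stats(database, pattern):
        """database: list of (sequence, probability), where a sequence is a list of
        (item, utility) pairs; pattern: list of items. Returns (utility, probability),
        using the first in-order occurrence, a simplification of the HUPSPM definitions."""
        utility, probability = 0.0, 0.0
        for sequence, prob in database:
            if contains_in_order(sequence, pattern):
                u, idx = 0.0, 0
                for p in pattern:
                    while sequence[idx][0] != p:
                        idx += 1
                    u += sequence[idx][1]
                    idx += 1
                utility += u
                probability += prob
        return utility, probability

    db = [([("a", 5), ("b", 2), ("c", 3)], 0.9),
          ([("a", 1), ("c", 4)], 0.6),
          ([("b", 2), ("c", 1)], 0.8)]
    print(pattern_stats(db, ["a", "c"]))  # (summed utility, summed probability)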
Houck, Hannes A.; De Bruycker, Kevin; Billiet, Stijn; Dhanis, Bastiaan; Goossens, Hannelore; Catak, Saron; Van Speybroeck, Veronique
2017-01-01
The reaction of triazolinediones (TADs) and indoles is of particular interest for polymer chemistry applications, as it is a very fast and irreversible additive-free process at room temperature, but can be turned into a dynamic covalent bond forming process at elevated temperatures, giving a reliable bond exchange or ‘transclick’ reaction. In this paper, we report an in-depth study aimed at controlling the TAD–indole reversible click reactions through rational design of modified indole reaction partners. This has resulted in the identification of a novel class of easily accessible indole derivatives that give dynamic TAD-adduct formation at significantly lower temperatures. We further demonstrate that these new substrates can be used to design a directed cascade of click reactions of a functionalized TAD moiety from an initial indole reaction partner to a second indole, and finally to an irreversible reaction partner. This controlled sequence of click and transclick reactions of a single TAD reagent between three different substrates has been demonstrated both on small molecule and macromolecular level, and the factors that control the reversibility profiles have been rationalized and guided by mechanistic considerations supported by theoretical calculations. PMID:28507685
Mulder, Willem H; Crawford, Forrest W
2015-01-07
Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.
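For reference, the Yule (pure-birth) model mentioned above is easy to simulate: with k extant lineages, the waiting time to the next speciation is exponential with rate k times the birth rate. A minimal sketch (Python; the birth rate and seed are arbitrary), producing the branching times from which quantities such as the shared branch length of two random tips can be derived:

    import random

    def yule_branching_times(n_tips, birth_rate=1.0, seed=1):
        """Speciation times of a Yule (pure-birth) tree: with k extant lineages the
        waiting time to the next speciation is Exponential(k * birth_rate)."""
        rng = random.Random(seed)
        t, times = 0.0, []
        for k in range(1, n_tips):
            t += rng.expovariate(k * birth_rate)
            times.append(t)
        return times

    print(yule_branching_times(5))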
Binding branched and linear DNA structures: From isolated clusters to fully bonded gels
NASA Astrophysics Data System (ADS)
Fernandez-Castanon, J.; Bomboi, F.; Sciortino, F.
2018-01-01
The proper design of DNA sequences allows for the formation of well-defined supramolecular units with controlled interactions via a consecution of self-assembling processes. Here, we benefit from the controlled DNA self-assembly to experimentally realize particles with well-defined valence, namely, tetravalent nanostars (A) and bivalent chains (B). We specifically focus on the case in which A particles can only bind to B particles, via appropriately designed sticky-end sequences. Hence AA and BB bonds are not allowed. Such a binary mixture system reproduces with DNA-based particles the physics of poly-functional condensation, with an exquisite control over the bonding process, tuned by the ratio, r, between B and A units and by the temperature, T. We report dynamic light scattering experiments in a window of Ts ranging from 10 °C to 55 °C and an interval of r around the percolation transition to quantify the decay of the density correlation for the different cases. At low T, when all possible bonds are formed, the system behaves as a fully bonded network, as a percolating gel, and as a cluster fluid depending on the selected r.
Designing oligo libraries taking alternative splicing into account
NASA Astrophysics Data System (ADS)
Shoshan, Avi; Grebinskiy, Vladimir; Magen, Avner; Scolnicov, Ariel; Fink, Eyal; Lehavi, David; Wasserman, Alon
2001-06-01
We have designed sequences for DNA microarrays and oligo libraries, taking alternative splicing into account. Alternative splicing is a common phenomenon, occurring in more than 25% of the human genes. In many cases, different splice variants have different functions, are expressed in different tissues or may indicate different stages of disease. When designing sequences for DNA microarrays or oligo libraries, it is very important to take into account the sequence information of all the mRNA transcripts. Therefore, when a gene has more than one transcript (as a result of alternative splicing, alternative promoter sites or alternative poly-adenylation sites), it is very important to take all of them into account in the design. We have used the LEADS transcriptome prediction system to cluster and assemble the human sequences in GenBank and design optimal oligonucleotides for all the human genes with a known mRNA sequence based on the LEADS predictions.
Multimodal and ubiquitous computing systems: supporting independent-living older users.
Perry, Mark; Dowdall, Alan; Lines, Lorna; Hone, Kate
2004-09-01
We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors--these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
The Voyager spacecraft /James Watt International Gold Medal Lecture/
NASA Technical Reports Server (NTRS)
Heacock, R. L.
1980-01-01
The Voyager Project background is reviewed with emphasis on selected features of the Voyager spacecraft. Investigations by the Thermo-electric Outer Planets Spacecraft Project are discussed, including trajectories, design requirements, and the development of a Self Test and Repair computer, and a Computer Accessed Telemetry System. The design and configuration of the spacecraft are described, including long range communications, attitude control, solar independent power, sequencing and control data handling, and spacecraft propulsion. The development program, maintained by JPL, experienced a variety of problems such as design deficiencies, and process control and manufacturing problems. Finally, the spacecraft encounter with Jupiter is discussed, and expectations for the Saturn encounter are expressed.
Automated design evolution of stereochemically randomized protein foldamers
NASA Astrophysics Data System (ADS)
Ranbhor, Ranjit; Kumar, Anil; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel
2018-05-01
Diversification of chain stereochemistry opens up the possibilities of an ‘in principle’ increase in the design space of proteins. This huge increase in the sequence and consequent structural variation is aimed at the generation of smart materials. To diversify protein structure stereochemically, we introduced L- and D-α-amino acids as the design alphabet. With a sequence design algorithm, we explored the usage of specific variables such as chirality and the sequence of this alphabet in independent steps. With molecular dynamics, we folded stereochemically diverse homopolypeptides and evaluated their ‘fitness’ for possible design as protein-like foldamers. We propose a fitness function to prune the most optimal fold among 1000 structures simulated with an automated repetitive simulated annealing molecular dynamics (AR-SAMD) approach. The highly scored poly-leucine folds with sequence lengths of 24 and 30 amino acids were later sequence-optimized using a Dead End Elimination cum Monte Carlo based optimization tool. This paper demonstrates a novel approach for the de novo design of protein-like foldamers.
Evidence of automatic processing in sequence learning using process-dissociation
Mong, Heather M.; McCabe, David P.; Clegg, Benjamin A.
2012-01-01
This paper proposes a way to apply process-dissociation to sequence learning in addition and extension to the approach used by Destrebecqz and Cleeremans (2001). Participants were trained on two sequences separated from each other by a short break. Following training, participants self-reported their knowledge of the sequences. A recognition test was then performed which required discrimination of two trained sequences, either under the instructions to call any sequence encountered in the experiment “old” (the inclusion condition), or only sequence fragments from one half of the experiment “old” (the exclusion condition). The recognition test elicited automatic and controlled process estimates using the process dissociation procedure, and suggested both processes were involved. Examining the underlying processes supporting performance may provide more information on the fundamental aspects of the implicit and explicit constructs than has been attainable through awareness testing. PMID:22679465
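The controlled and automatic estimates referred to above follow from the standard process-dissociation equations, inclusion = C + (1 - C)A and exclusion = (1 - C)A. A worked sketch in Python with made-up recognition rates:

    def process_dissociation(p_inclusion, p_exclusion):
        """Standard process-dissociation estimates:
            inclusion = C + (1 - C) * A
            exclusion = (1 - C) * A
        so controlled C = inclusion - exclusion and automatic A = exclusion / (1 - C)."""
        C = p_inclusion - p_exclusion
        A = p_exclusion / (1 - C) if C < 1 else float("nan")
        return C, A

    print(process_dissociation(0.70, 0.40))  # C = 0.30, A is about 0.57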
Performing skin microbiome research: A method to the madness
Kong, Heidi H.; Andersson, Björn; Clavel, Thomas; Common, John E.; Jackson, Scott A.; Olson, Nathan D.; Segre, Julia A.; Traidl-Hoffmann, Claudia
2017-01-01
Growing interest in microbial contributions to human health and disease has increasingly led investigators to examine the microbiome in both healthy skin and cutaneous disorders, including acne, psoriasis and atopic dermatitis. The need for common language, effective study design, and validated methods are critical for high-quality, standardized research. Features, unique to skin, pose particular challenges when conducting microbiome research. This review discusses microbiome research standards and highlights important factors to consider, including clinical study design, skin sampling, sample processing, DNA sequencing, control inclusion, and data analysis. PMID:28063650
Evolution-Inspired Computational Design of Symmetric Proteins.
Voet, Arnout R D; Simoncini, David; Tame, Jeremy R H; Zhang, Kam Y J
2017-01-01
Monomeric proteins with a number of identical repeats creating symmetrical structures are potentially very valuable building blocks with a variety of bionanotechnological applications. As such proteins do not occur naturally, the emerging field of computational protein design serves as an excellent tool to create them from nonsymmetrical templates. Existing pseudo-symmetrical proteins are believed to have evolved from oligomeric precursors by duplication and fusion of identical repeats. Here we describe a computational workflow to reverse-engineer this evolutionary process in order to create stable proteins consisting of identical sequence repeats.
Experimental studies of two-stage centrifugal dust concentrator
NASA Astrophysics Data System (ADS)
Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.
2018-03-01
The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method and the laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. Dispersion analysis of the dust particles was performed by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design with four factors was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.
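A central composite rotatable design for four factors, as used above, consists of a 2^4 factorial cube, eight axial points at distance alpha = 2^(4/4) = 2, and replicated centre points. A sketch that generates the coded design points (Python; the number of centre points is an arbitrary choice here):

    from itertools import product

    def ccd_rotatable(k=4, n_center=6):
        """Coded points of a rotatable central composite design with k factors:
        a full 2^k factorial cube, 2k axial (star) points at distance alpha = 2**(k/4),
        and replicated centre points."""
        alpha = 2 ** (k / 4)
        cube = [list(p) for p in product((-1.0, 1.0), repeat=k)]
        star = []
        for i in range(k):
            for s in (-alpha, alpha):
                pt = [0.0] * k
                pt[i] = s
                star.append(pt)
        center = [[0.0] * k for _ in range(n_center)]
        return cube + star + center

    points = ccd_rotatable()
    print(len(points))  # 16 cube + 8 star + 6 centre = 30 runs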
Barbi, Florian; Bragalini, Claudia; Vallon, Laurent; Prudent, Elsa; Dubost, Audrey; Fraissinet-Tachet, Laurence; Marmeisse, Roland; Luis, Patricia
2014-01-01
Plant biomass degradation in soil is one of the key steps of carbon cycling in terrestrial ecosystems. Fungal saprotrophic communities play an essential role in this process by producing hydrolytic enzymes active on the main components of plant organic matter. Open questions in this field regard the diversity of the species involved, the major biochemical pathways implicated and how these are affected by external factors such as litter quality or climate changes. This can be tackled by environmental genomic approaches involving the systematic sequencing of key enzyme-coding gene families using soil-extracted RNA as material. Such an approach necessitates the design and evaluation of gene family-specific PCR primers producing sequence fragments compatible with high-throughput sequencing approaches. In the present study, we developed and evaluated PCR primers for the specific amplification of fungal CAZy Glycoside Hydrolase gene families GH5 (subfamily 5) and GH11 encoding endo-β-1,4-glucanases and endo-β-1,4-xylanases respectively as well as Basidiomycota class II peroxidases, corresponding to the CAZy Auxiliary Activity family 2 (AA2), active on lignin. These primers were experimentally validated using DNA extracted from a wide range of Ascomycota and Basidiomycota species including 27 with sequenced genomes. Along with the published primers for Glycoside Hydrolase GH7 encoding enzymes active on cellulose, the newly designed primers were shown to be compatible with the Illumina MiSeq sequencing technology. Sequences obtained from RNA extracted from beech or spruce forest soils showed a high diversity and were uniformly distributed in gene trees featuring the global diversity of these gene families. This high-throughput sequencing approach using several degenerate primers constitutes a robust method, which allows the simultaneous characterization of the diversity of different fungal transcripts involved in plant organic matter degradation and may lead to the discovery of complex patterns in gene expression of soil fungal communities. PMID:25545363
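Degenerate primers of the kind validated above encode many concrete oligonucleotides through IUPAC ambiguity codes. The sketch below expands such a primer and reports its degeneracy (Python; the example primer string is invented and is not one of the study's GH5/GH11/AA2 primers).

    from itertools import product

    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
             "S": "GC", "W": "AT", "K": "GT", "M": "AC", "B": "CGT",
             "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

    def expand_degenerate(primer):
        """All concrete sequences encoded by a degenerate (IUPAC) primer."""
        return ["".join(p) for p in product(*(IUPAC[b] for b in primer.upper()))]

    def degeneracy(primer):
        n = 1
        for b in primer.upper():
            n *= len(IUPAC[b])
        return n

    print(degeneracy("GGNGAYGGNGTNWS"))   # hypothetical primer, for illustration only
    print(expand_degenerate("GAYGG"))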
BG7: A New Approach for Bacterial Genome Annotation Designed for Next Generation Sequencing Data
Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Pareja, Eduardo; Tobes, Raquel
2012-01-01
BG7 is a new system for de novo bacterial, archaeal and viral genome annotation based on a new approach specifically designed for annotating genomes sequenced with next generation sequencing technologies. The system is versatile and able to annotate genes even in the step of preliminary assembly of the genome. It is especially efficient detecting unexpected genes horizontally acquired from bacterial or archaeal distant genomes, phages, plasmids, and mobile elements. From the initial phases of the gene annotation process, BG7 exploits the massive availability of annotated protein sequences in databases. BG7 predicts ORFs and infers their function based on protein similarity with a wide set of reference proteins, integrating ORF prediction and functional annotation phases in just one step. BG7 is especially tolerant to sequencing errors in start and stop codons, to frameshifts, and to assembly or scaffolding errors. The system is also tolerant to the high level of gene fragmentation which is frequently found in not fully assembled genomes. BG7's current version, which is developed in Java, takes advantage of Amazon Web Services (AWS) cloud computing features, but it can also be run locally on any operating system. BG7 is a fast, automated and scalable system that can cope with the challenge of analyzing the huge amount of genomes that are being sequenced with NGS technologies. Its capabilities and efficiency were demonstrated in the 2011 EHEC Germany outbreak in which BG7 was used to get the first annotations right the next day after the first entero-hemorrhagic E. coli genome sequences were made publicly available. The suitability of BG7 for genome annotation has been proved for Illumina, 454, Ion Torrent, and PacBio sequencing technologies. Besides, thanks to its plasticity, our system could be very easily adapted to work with new technologies in the future. PMID:23185310
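As a point of contrast with BG7's similarity-driven, error-tolerant annotation, the sketch below shows what a naive forward-strand ORF caller looks like; BG7 goes well beyond this by inferring function from protein similarity and by tolerating frameshifts and broken start/stop codons. The test sequence and length threshold are invented.

    def find_orfs(seq, min_len=90):
        """Naive ORF caller on the forward strand: ATG ... first in-frame stop codon."""
        stops = {"TAA", "TAG", "TGA"}
        seq = seq.upper()
        orfs = []
        for frame in range(3):
            i = frame
            while i + 3 <= len(seq):
                if seq[i:i + 3] == "ATG":
                    j = i + 3
                    while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                        j += 3
                    if j + 3 <= len(seq) and j + 3 - i >= min_len:
                        orfs.append((i, j + 3))   # (start, end) including the stop codon
                    i = j + 3
                else:
                    i += 3
        return orfs

    print(find_orfs("CCATGAAACCC" + "GCT" * 40 + "TAGGG", min_len=30))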
OVAS: an open-source variant analysis suite with inheritance modelling.
Mozere, Monika; Tekman, Mehmet; Kari, Jameela; Bockenhauer, Detlef; Kleta, Robert; Stanescu, Horia
2018-02-08
The advent of modern high-throughput genetics continually broadens the gap between the rising volume of sequencing data and the tools required to process them. The need to pinpoint a small subset of functionally important variants has now shifted towards identifying the critical differences between normal variants and disease-causing ones. The ever-increasing reliance on cloud-based services for sequence analysis and the non-transparent methods they utilize has prompted the need for more in-situ services that can provide a safer and more accessible environment to process patient data, especially in circumstances where continuous internet usage is limited. To address these issues, we herein propose our standalone Open-source Variant Analysis Sequencing (OVAS) pipeline, consisting of three key stages of processing that pertain to the separate modes of annotation, filtering, and interpretation. Core annotation performs variant mapping to gene isoforms at the exon/intron level, appends functional data pertaining to the type of variant mutation, and determines hetero-/homozygosity. An extensive inheritance-modelling module in conjunction with 11 other filtering components can be used in sequence ranging from single quality control to multi-file penetrance model specifics such as X-linked recessive or mosaicism. Depending on the type of interpretation required, additional annotation is performed to identify organ specificity through gene expression and protein domains. In the course of this paper we analysed an autosomal recessive case study. OVAS made effective use of the filtering modules to recapitulate the results of the study by identifying the prescribed compound-heterozygous disease pattern from exome-capture sequence input samples. OVAS is an offline open-source modular-driven analysis environment designed to annotate and extract useful variants from Variant Call Format (VCF) files, and process them under an inheritance context through a top-down filtering schema of swappable modules, run entirely off a live bootable medium and accessed locally through a web-browser.
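One of the inheritance models mentioned above, compound heterozygosity, can be illustrated with a toy trio filter. The sketch below (Python) works on an invented in-memory variant table rather than the VCF input OVAS actually processes, and omits quality control and the other filtering modules.

    from collections import defaultdict

    def compound_het_candidates(variants):
        """Genes carrying >= 2 heterozygous variants in the proband, at least one
        inherited from each parent - a crude version of an inheritance-model filter.

        variants: list of dicts with keys gene, proband_gt, mother_gt, father_gt,
                  where genotypes are 'het', 'hom', or 'ref'."""
        by_gene = defaultdict(list)
        for v in variants:
            if v["proband_gt"] == "het":
                by_gene[v["gene"]].append(v)
        hits = []
        for gene, vs in by_gene.items():
            from_mother = any(v["mother_gt"] != "ref" and v["father_gt"] == "ref" for v in vs)
            from_father = any(v["father_gt"] != "ref" and v["mother_gt"] == "ref" for v in vs)
            if from_mother and from_father:
                hits.append(gene)
        return hits

    demo = [
        {"gene": "GENE1", "proband_gt": "het", "mother_gt": "het", "father_gt": "ref"},
        {"gene": "GENE1", "proband_gt": "het", "mother_gt": "ref", "father_gt": "het"},
        {"gene": "GENE2", "proband_gt": "het", "mother_gt": "het", "father_gt": "ref"},
    ]
    print(compound_het_candidates(demo))  # ['GENE1']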
In silico design of ligand triggered RNA switches.
Findeiß, Sven; Hammer, Stefan; Wolfinger, Michael T; Kühnl, Felix; Flamm, Christoph; Hofacker, Ivo L
2018-04-13
This contribution sketches a work flow to design an RNA switch that is able to adopt two structural conformations in a ligand-dependent way. A well characterized RNA aptamer, i.e., knowing its Kd and adaptive structural features, is an essential ingredient of the described design process. We exemplify the principles using the well-known theophylline aptamer throughout this work. The aptamer in its ligand-binding competent structure represents one structural conformation of the switch while an alternative fold that disrupts the binding-competent structure forms the other conformation. To keep it simple we do not incorporate any regulatory mechanism to control transcription or translation. We elucidate a commonly used design process by explicitly dissecting and explaining the necessary steps in detail. We developed a novel objective function which specifies the mechanistics of this simple, ligand-triggered riboswitch and describe an extensive in silico analysis pipeline to evaluate important kinetic properties of the designed sequences. This protocol and the developed software can be easily extended or adapted to fit novel design scenarios and thus can serve as a template for future needs. Copyright © 2018. Published by Elsevier Inc.
Spent nuclear fuel project cold vacuum drying facility operations manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
IRWIN, J.J.
This document provides the Operations Manual for the Cold Vacuum Drying Facility (CVDF). The Manual was developed in conjunction with HNF-SD-SNF-SAR-002, Safety Analysis Report for the Cold Vacuum Drying Facility, Phase 2, Supporting Installation of Processing Systems (Garvin 1998) and the HNF-SD-SNF-DRD-002, 1997, Cold Vacuum Drying Facility Design Requirements, Rev. 3a. The Operations Manual contains general descriptions of all the process, safety and facility systems in the CVDF and a general CVD operations sequence, and has been developed for the SNFP Operations Organization. It shall be updated, expanded, and revised in accordance with future design, construction and startup phases of the CVDF until the CVDF final ORR is approved.
Morales, Lucia; Mateos-Gomez, Pedro A.; Capiscol, Carmen; del Palacio, Lorena; Sola, Isabel
2013-01-01
Preferential RNA packaging in coronaviruses involves the recognition of viral genomic RNA, a crucial process for viral particle morphogenesis mediated by RNA-specific sequences, known as packaging signals. An essential packaging signal component of transmissible gastroenteritis coronavirus (TGEV) has been further delimited to the first 598 nucleotides (nt) from the 5′ end of its RNA genome, by using recombinant viruses transcribing subgenomic mRNA that included potential packaging signals. The integrity of the entire sequence domain was necessary because deletion of any of the five structural motifs defined within this region abrogated specific packaging of this viral RNA. One of these RNA motifs was the stem-loop SL5, a highly conserved motif in coronaviruses located at nucleotide positions 106 to 136. Partial deletion or point mutations within this motif also abrogated packaging. Using TGEV-derived defective minigenomes replicated in trans by a helper virus, we have shown that TGEV RNA packaging is a replication-independent process. Furthermore, the last 494 nt of the genomic 3′ end were not essential for packaging, although this region increased packaging efficiency. TGEV RNA sequences identified as necessary for viral genome packaging were not sufficient to direct packaging of a heterologous sequence derived from the green fluorescent protein gene. These results indicated that TGEV genome packaging is a complex process involving many factors in addition to the identified RNA packaging signal. The identification of well-defined RNA motifs within the TGEV RNA genome that are essential for packaging will be useful for designing packaging-deficient biosafe coronavirus-derived vectors and providing new targets for antiviral therapies. PMID:23966403
Simultaneous inference of phylogenetic and transmission trees in infectious disease outbreaks.
Klinkenberg, Don; Backer, Jantien A; Didelot, Xavier; Colijn, Caroline; Wallinga, Jacco
2017-05-01
Whole-genome sequencing of pathogens from host samples becomes more and more routine during infectious disease outbreaks. These data provide information on possible transmission events which can be used for further epidemiologic analyses, such as identification of risk factors for infectivity and transmission. However, the relationship between transmission events and sequence data is obscured by uncertainty arising from four largely unobserved processes: transmission, case observation, within-host pathogen dynamics and mutation. To properly resolve transmission events, these processes need to be taken into account. Recent years have seen much progress in theory and method development, but existing applications make simplifying assumptions that often break up the dependency between the four processes, or are tailored to specific datasets with matching model assumptions and code. To obtain a method with wider applicability, we have developed a novel approach to reconstruct transmission trees with sequence data. Our approach combines elementary models for transmission, case observation, within-host pathogen dynamics, and mutation, under the assumption that the outbreak is over and all cases have been observed. We use Bayesian inference with MCMC for which we have designed novel proposal steps to efficiently traverse the posterior distribution, taking account of all unobserved processes at once. This allows for efficient sampling of transmission trees from the posterior distribution, and robust estimation of consensus transmission trees. We implemented the proposed method in a new R package phybreak. The method performs well in tests of both new and published simulated data. We apply the model to five datasets on densely sampled infectious disease outbreaks, covering a wide range of epidemiological settings. Using only sampling times and sequences as data, our analyses confirmed the original results or improved on them: the more realistic infection times place more confidence in the inferred transmission trees.
Computational Tools and Algorithms for Designing Customized Synthetic Genes
Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris
2014-01-01
Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations. PMID:25340050
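The simplest of the first-generation objectives mentioned above, codon usage optimization, can be caricatured as "one amino acid, one codon". A minimal sketch in Python, with a partial, made-up preference table standing in for a real host codon-usage table:

    # Hypothetical usage table: one preferred codon per amino acid for the target host.
    PREFERRED = {"M": "ATG", "K": "AAA", "L": "CTG", "*": "TAA", "A": "GCG",
                 "S": "AGC", "T": "ACC", "G": "GGC", "E": "GAA", "D": "GAT"}

    def naive_codon_optimize(protein):
        """'One amino acid, one codon' back-translation; real tools balance codon usage,
        GC content, restriction sites and RNA structure simultaneously."""
        return "".join(PREFERRED[aa] for aa in protein.upper())

    print(naive_codon_optimize("MKLAGE"))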
Philip, Ashok; Stephens, Mark; Mitchell, Sheila L; Watkins, E Blake
2015-04-25
To provide students with an opportunity to participate in medicinal chemistry research within the doctor of pharmacy (PharmD) curriculum. We designed and implemented a 3-course sequence in drug design or drug synthesis for pharmacy students consisting of a 1-month advanced elective followed by two 1-month research advanced pharmacy practice experiences (APPEs). To maximize student involvement, this 3-course sequence was offered to third-year and fourth-year students twice per calendar year. Students were evaluated based on their commitment to the project's success, productivity, and professionalism. Students also evaluated the course sequence using a 14-item course evaluation rubric. Student feedback was overwhelmingly positive. Students found the experience to be a valuable component of their pharmacy curriculum. We successfully designed and implemented a 3-course research sequence that allows PharmD students in the traditional 4-year program to participate in drug design and synthesis research. Students report the sequence enhanced their critical-thinking and problem-solving skills and helped them develop as independent learners. Based on the success achieved with this sequence, efforts are underway to develop research APPEs in other areas of the pharmaceutical sciences.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
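A small sketch of the objective: in a permutation flow shop, a job's completion time on machine m is the larger of its own completion on machine m-1 and the previous job's completion on machine m, plus its processing time, and the criterion sums weighted squared completion times on the last machine. The WSPT-style order below uses total processing time over weight as a stand-in for the paper's WSPT-CC rule, and the job data are invented.

    def completion_times(processing, order):
        """Completion time of each job on the last machine of a permutation flow shop;
        processing[j][m] is the processing time of job j on machine m."""
        n_machines = len(processing[0])
        prev = [0.0] * n_machines   # completion times of the previous job on each machine
        completions = []
        for j in order:
            row = []
            for m in range(n_machines):
                start = max(prev[m], row[m - 1] if m else 0.0)
                row.append(start + processing[j][m])
            prev = row
            completions.append(row[-1])
        return completions

    def total_weighted_quadratic(processing, weights, order):
        return sum(w * c ** 2 for w, c in zip((weights[j] for j in order),
                                              completion_times(processing, order)))

    p = [[3, 2], [1, 4], [2, 2]]   # 3 jobs on 2 machines
    w = [1.0, 2.0, 1.5]
    wspt = sorted(range(len(p)), key=lambda j: sum(p[j]) / w[j])   # WSPT-style order
    print(wspt, total_weighted_quadratic(p, w, wspt))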
ClusterCAD: a computational platform for type I modular polyketide synthase design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eng, Clara H.; Backman, Tyler W H; Bailey, Constance B.
2017-10-11
Here, we present ClusterCAD, a web-based toolkit designed to leverage the collinear structure and deterministic logic of type I modular polyketide synthases (PKSs) for synthetic biology applications. The unique organization of these megasynthases, combined with the diversity of their catalytic domain building blocks, has fueled an interest in harnessing the biosynthetic potential of PKSs for the microbial production of both novel natural product analogs and industrially relevant small molecules. However, a limited theoretical understanding of the determinants of PKS fold and function poses a substantial barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify the process of designing experiments to test strategies for engineering PKS variants. ClusterCAD provides chemical structures with stereochemistry for the intermediates generated by each PKS module, as well as sequence- and structure-based search tools that allow users to identify modules based either on amino acid sequence or on the chemical structure of the cognate polyketide intermediate. ClusterCAD can be accessed at https://clustercad.jbei.org and at http://clustercad.igb.uci.edu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Weizhao; Zhang, Zixuan; Lu, Jie
Carbon fiber composites have received growing attention because of their high performance. One economical method of manufacturing composite parts is a preforming step followed by a compression molding process. In this sequence, the preforming procedure shapes the prepreg, the composite with uncured resin, into the product geometry, while the molding process cures the resin. Slip between prepreg layers is observed during preforming, and this paper reports a method to characterize the interaction between different prepreg layers, which is critical to predictive modeling and design optimization. An experimental setup was established to evaluate the interactions under various industrial production conditions. The experimental results were analyzed for an in-depth understanding of how temperature, relative sliding speed, and fiber orientation affect the tangential interaction between two prepreg layers. The interaction factors measured in these experiments will be implemented in the computational preforming program.
Xiao, Xingqing; Agris, Paul F; Hall, Carol K
2016-05-01
A computational strategy that integrates our peptide search algorithm with atomistic molecular dynamics simulation was used to design rational peptide drugs that recognize and bind to the anticodon stem and loop domain (ASL(Lys3)) of human tRNA(Lys3) (anticodon UUU) for the purpose of interrupting HIV replication. The score function of the search algorithm was improved by adding a peptide stability term weighted by an adjustable factor λ to the peptide binding free energy. The five best peptide sequences associated with five different values of λ were determined using the search algorithm and then input into atomistic simulations to examine the stability of the peptides' folded conformations and their ability to bind to ASL(Lys3). Simulation results demonstrated that setting an intermediate value of λ achieves a good balance between optimizing the peptide's binding ability and stabilizing its folded conformation during the sequence evolution process, and hence leads to optimal binding to the target ASL(Lys3). Thus, addition of a peptide stability term significantly improves the success rate for our peptide design search. © 2016 Wiley Periodicals, Inc.
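The λ-weighted score function described above can be illustrated schematically. In the sketch below, the binding and stability terms are arbitrary placeholder functions (not the force-field estimates used in the study), and the search is a simple greedy Monte Carlo walk rather than the authors' search algorithm.

```python
# Schematic of a lambda-weighted score for peptide sequence search.
# The two energy functions are placeholders, not the study's energy model.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def binding_free_energy(seq):        # placeholder: lower means better binding
    return -sum(ord(a) % 7 for a in seq) / len(seq)

def stability_energy(seq):           # placeholder: lower means a more stable fold
    return -sum(ord(a) % 5 for a in seq) / len(seq)

def score(seq, lam):
    """Search score = binding free energy + lambda * stability term."""
    return binding_free_energy(seq) + lam * stability_energy(seq)

def mutate(seq):
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

def search(lam, length=12, steps=2000, seed=1):
    """Greedy Monte Carlo walk that keeps the best-scoring sequence."""
    random.seed(seed)
    best = "".join(random.choice(AMINO_ACIDS) for _ in range(length))
    best_score = score(best, lam)
    for _ in range(steps):
        cand = mutate(best)
        s = score(cand, lam)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score

if __name__ == "__main__":
    for lam in (0.0, 0.5, 2.0):      # an intermediate lambda balances the two terms
        print(lam, *search(lam))
```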
A programming language for composable DNA circuits
Phillips, Andrew; Cardelli, Luca
2009-01-01
Recently, a range of information-processing circuits have been implemented in DNA by using strand displacement as their main computational mechanism. Examples include digital logic circuits and catalytic signal amplification circuits that function as efficient molecular detectors. As new paradigms for DNA computation emerge, the development of corresponding languages and tools for these paradigms will help to facilitate the design of DNA circuits and their automatic compilation to nucleotide sequences. We present a programming language for designing and simulating DNA circuits in which strand displacement is the main computational mechanism. The language includes basic elements of sequence domains, toeholds and branch migration, and assumes that strands do not possess any secondary structure. The language is used to model and simulate a variety of circuits, including an entropy-driven catalytic gate, a simple gate motif for synthesizing large-scale circuits and a scheme for implementing an arbitrary system of chemical reactions. The language is a first step towards the design of modelling and simulation tools for DNA strand displacement, which complements the emergence of novel implementation strategies for DNA computing. PMID:19535415
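A toy model gives a flavor of the abstraction such a language works with. The sketch below represents strands as tuples of named domains and applies a single toehold-mediated displacement rule; it is an illustrative assumption-level model, not the syntax or semantics of the language described in the paper.

```python
# Toy abstraction of toehold-mediated strand displacement:
# strands are tuples of domain names; a '*' suffix marks the complement.
# Illustrative model only, not the DSD language from the paper.

def complement(domain):
    return domain[:-1] if domain.endswith("*") else domain + "*"

def can_displace(invader, gate_bottom, incumbent):
    """The invader binds the gate if its first domain pairs with the exposed
    toehold (first bottom domain) and the remaining domains match the incumbent."""
    toehold = gate_bottom[0]
    if invader[0] != complement(toehold):
        return False
    return invader[1:] == incumbent            # branch-migration region matches

def displace(invader, gate_bottom, incumbent):
    if can_displace(invader, gate_bottom, incumbent):
        return {"released": incumbent, "new_duplex": (invader, gate_bottom)}
    return None

if __name__ == "__main__":
    gate_bottom = ("t*", "x*", "y*")           # bottom strand with exposed toehold t*
    incumbent   = ("x", "y")                   # bound output strand (no toehold)
    invader     = ("t", "x", "y")              # input strand carrying toehold t
    print(displace(invader, gate_bottom, incumbent))
```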
Model annotation for synthetic biology: automating model to nucleotide sequence conversion
Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil
2011-01-01
Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and an important element named the pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most existing RNA sequence design algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences designed by Enzymer with the results obtained from the state-of-the-art tools MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762
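The adaptive, defect-weighted sampling idea can be sketched without a folding engine. In the illustration below, a toy per-position defect function stands in for NUPACK's ensemble-defect computation, and mutation positions are sampled with probability proportional to their defect.

```python
# Sketch of adaptive defect-weighted sampling for RNA sequence design.
# The per-position "defect" is a toy stand-in for NUPACK's ensemble defect.
import random

BASES = "ACGU"

def per_position_defect(seq, target):
    """Toy defect: 1.0 where the base disagrees with an (assumed) ideal base for
    the target, else 0.1.  A real design would query a folding engine instead."""
    return [0.1 if b == t else 1.0 for b, t in zip(seq, target)]

def weighted_choice(weights):
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def design(target, steps=500, seed=7):
    random.seed(seed)
    seq = [random.choice(BASES) for _ in target]
    for _ in range(steps):
        defect = per_position_defect(seq, target)
        if sum(defect) <= 0.1 * len(seq):      # low total defect: stop early
            break
        i = weighted_choice(defect)            # mutate high-defect positions more often
        seq[i] = random.choice(BASES)
    return "".join(seq), sum(per_position_defect(seq, target))

if __name__ == "__main__":
    target = "GGGAAACCCUUUGGG"                 # stand-in for a target structure
    print(design(target))
```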
NASA Astrophysics Data System (ADS)
Alshakova, E. L.
2017-01-01
A program written in the AutoLISP language can automatically generate parametric drawings while working in the AutoCAD software product. Students learn AutoLISP program development using a methodical complex of instructions in which real examples of creating images and drawings are worked through. The instructions contain the reference information needed to complete the proposed tasks. Training in AutoLISP programming is based on step-by-step development of the program: the program draws the elements of a part drawing by means of purpose-built functions whose argument values are supplied in the same sequence in which AutoCAD issues prompts when the corresponding command is executed in the editor. Program design is thus reduced to the step-by-step construction of functions and of the sequence of their calls. The author considers the development of AutoLISP programs for creating parametric drawings of parts of a given design, where the user enters the dimensions of the part elements. These programs generate variants of the graphic assignments performed in the "Engineering Graphics" and "Engineering and Computer Graphics" courses. Individual assignments help students develop independent skills in reading and creating drawings, as well as in 3D modeling.
Two-level optimization of composite wing structures based on panel genetic optimization
NASA Astrophysics Data System (ADS)
Liu, Boyang
The design of complex composite structures used in aerospace or automotive vehicles presents a major challenge in terms of computational cost. Discrete choices for ply thicknesses and ply angles lead to a combinatorial optimization problem that is too expensive to solve with presently available computational resources. We developed the following methodology for handling this problem for wing structural design: we used a two-level optimization approach with response-surface approximations to optimize panel failure loads for the upper-level wing optimization. We tailored efficient permutation genetic algorithms to the panel stacking sequence design on the lower level. We also developed an approach for improving continuity of ply stacking sequences among adjacent panels. The decomposition approach led to a lower-level optimization of stacking sequence with a given number of plies in each orientation. An efficient permutation genetic algorithm (GA) was developed for handling this problem. We demonstrated through examples that the permutation GAs are more efficient for stacking sequence optimization than a standard GA. Repair strategies for dealing with constraints were also developed for both the standard GA and the permutation GAs; these repair strategies can significantly reduce computation costs in both cases. A two-level optimization procedure for composite wing design subject to strength and buckling constraints is presented. At wing-level design, continuous optimization of ply thicknesses with orientations of 0°, 90°, and +/-45° is performed to minimize weight. At the panel level, the number of plies of each orientation (rounded to integers) and in-plane loads are specified, and a permutation genetic algorithm is used to optimize the stacking sequence. The process begins with many panel genetic optimizations for a range of loads and numbers of plies of each orientation. Next, a cubic polynomial response surface is fitted to the optimum buckling load. The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in adjacent laminates terminate at the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.
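A permutation genetic algorithm for stacking-sequence design can be sketched compactly. The example below encodes a laminate as a permutation of a fixed multiset of ply angles and uses order crossover with swap mutation; the fitness function is a toy surrogate, not the buckling-load response surface described in the dissertation.

```python
# Illustrative permutation GA for laminate stacking-sequence design.
# The fitness function is a toy surrogate, not a buckling-load response surface.
import random

PLIES = [0, 0, 90, 90, 45, -45, 45, -45]       # available ply angles (fixed counts)

def stack_of(perm):
    return [PLIES[i] for i in perm]

def fitness(perm):
    """Toy objective: reward +/-45 plies at the surface, penalize runs of
    more than two identical contiguous orientations."""
    stack = stack_of(perm)
    score = sum(1.0 for a in stack[:2] if abs(a) == 45)
    run = 1
    for prev, cur in zip(stack, stack[1:]):
        run = run + 1 if cur == prev else 1
        if run > 2:
            score -= 1.0
    return score

def order_crossover(p1, p2):
    """OX crossover on index permutations, which keeps the ply multiset intact."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def swap_mutation(perm, rate=0.2):
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def permutation_ga(pop_size=30, generations=100, seed=3):
    random.seed(seed)
    base = list(range(len(PLIES)))
    pop = [random.sample(base, len(base)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [swap_mutation(order_crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=fitness)
    return stack_of(best), fitness(best)

if __name__ == "__main__":
    print(permutation_ga())
```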
Isothermal folding of a light-up bio-orthogonal RNA origami nanoribbon.
Torelli, Emanuela; Kozyra, Jerzy Wieslaw; Gu, Jing-Ying; Stimming, Ulrich; Piantanida, Luca; Voïtchovsky, Kislon; Krasnogor, Natalio
2018-05-03
RNA plays intriguing roles in many cellular processes, and its versatility underpins many different applications in synthetic biology. Nonetheless, RNA origami as a method for nanofabrication is not yet fully explored, and the majority of RNA nanostructures are based on natural pre-folded RNA. Here we describe a biologically inert and uniquely addressable RNA origami scaffold that self-assembles into a nanoribbon by means of seven staple strands. An algorithm is applied to generate a synthetic De Bruijn scaffold sequence that is characterized by the lack of biologically active sites and of repetitions larger than a predetermined design parameter. This RNA scaffold and the complementary staples fold under a physiologically compatible isothermal condition. In order to monitor the folding, we designed a new split Broccoli aptamer system. The aptamer is divided into two nonfunctional sequences, each of which is integrated into the 5' or 3' end of two staple strands complementary to the RNA scaffold. Using fluorescence measurements and in-gel imaging, we demonstrate that once RNA origami assembly occurs, the split aptamer sequences are brought into close proximity, forming the aptamer and turning on the fluorescence. This light-up 'bio-orthogonal' RNA origami provides a prototype with potential for in vivo origami applications.
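The repeat constraint on the scaffold can be illustrated with simple k-mer bookkeeping. The sketch below grows a sequence in which no k-mer occurs twice, so no repetition longer than the design parameter k can appear; it is a naive stand-in for the De Bruijn-based generator used in the paper and does not screen for biologically active sites.

```python
# Naive sketch: grow an RNA sequence so that no k-mer occurs twice,
# i.e. no repeated subsequence of length >= k.  Illustrative only.
import random

BASES = "ACGU"

def build_scaffold(length, k=8, seed=11, max_tries=50):
    random.seed(seed)
    seq = [random.choice(BASES) for _ in range(k)]
    seen = {"".join(seq)}
    while len(seq) < length:
        for _ in range(max_tries):
            base = random.choice(BASES)
            kmer = "".join(seq[-(k - 1):]) + base
            if kmer not in seen:
                seen.add(kmer)
                seq.append(base)
                break
        else:
            seq.pop()                          # dead end: backtrack one base and retry
    return "".join(seq)

if __name__ == "__main__":
    s = build_scaffold(120, k=8)
    kmers = [s[i:i + 8] for i in range(len(s) - 7)]
    print(s)
    print("all 8-mers unique:", len(set(kmers)) == len(kmers))
```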
Predicting Silk Fiber Mechanical Properties through Multiscale Simulation and Protein Design.
Rim, Nae-Gyune; Roberts, Erin G; Ebrahimi, Davoud; Dinjaski, Nina; Jacobsen, Matthew M; Martín-Moldes, Zaira; Buehler, Markus J; Kaplan, David L; Wong, Joyce Y
2017-08-14
Silk is a promising material for biomedical applications, and much research is focused on how application-specific, mechanical properties of silk can be designed synthetically through proper amino acid sequences and processing parameters. This protocol describes an iterative process between research disciplines that combines simulation, genetic synthesis, and fiber analysis to better design silk fibers with specific mechanical properties. Computational methods are used to assess the protein polymer structure as it forms an interconnected fiber network through shearing and how this process affects fiber mechanical properties. Model outcomes are validated experimentally with the genetic design of protein polymers that match the simulation structures, fiber fabrication from these polymers, and mechanical testing of these fibers. Through iterative feedback between computation, genetic synthesis, and fiber mechanical testing, this protocol will enable a priori prediction capability of recombinant material mechanical properties via insights from the resulting molecular architecture of the fiber network based entirely on the initial protein monomer composition. This style of protocol may be applied to other fields where a research team seeks to design a biomaterial with biomedical application-specific properties. This protocol highlights when and how the three research groups (simulation, synthesis, and engineering) should be interacting to arrive at the most effective method for predictive design of their material.
SwaMURAy - Swapping Memory Unit for Radio Astronomy
NASA Astrophysics Data System (ADS)
Winberg, Simon
2016-03-01
This paper concerns design and performance testing of an HDL module called SwaMURAy that is a configurable, high-speed data sequencing and flow control module serving as an intermediary between data acquisition and subsequent processing stages. While a FIFO suffices for many applications, our case needed a more elaborate solution to overcome legacy design limitations. The SwaMURAy is designed around a system where a block of sampled data is acquired at a fast rate and is then distributed among multiple processing paths to achieve a desired overall processing rate. This architecture provides an effective design pattern around which various software defined radio (SDR) and radio astronomy applications can be built. This solution was partly in response to legacy design restrictions of the SDR platform we used, a difficulty likely experienced by many developers whereby new sampling peripherals are inhibited by legacy characteristics of an underlying reconfigurable platform. Our SDR platform had a planned lifetime of at least five years as a complete redesign and refabrication would be too costly. While the SwaMURAy overcame some performance problems, other problems arose. This paper overviews the SwaMURAy design, performance improvements achieved in an SDR case study, and discusses remaining limitations and workarounds we expect will achieve further improvements.
[Design of a HACCP Plan for the Gouda-type cheesemaking process in a milk processing plant].
Dávila, Jacqueline; Reyes, Genara; Corzo, Otoniel
2006-03-01
The Hazard Analysis and Critical Control Point (HACCP) system is a preventive and systematic method used to identify, assess and control hazards related to raw materials, ingredients, processing, marketing and the intended consumer, in order to assure food safety. The aim of this study was to design a HACCP plan for implementation in a Gouda-type cheese-making process at a dairy processing plant. The methodology was based on applying the seven HACCP principles, information from the plant on compliance with the prerequisite programs (70-80%), the experience of the HACCP team, and the sequence of stages established by the COVENIN 3802 standard for implementing the HACCP system. A HACCP plan was proposed comprising the scope, the selection of the HACCP team, the description of the product and its intended use, the flow diagram of the process, the hazard analysis, and the control table of the plan with the critical control points (CCP). The following CCPs were identified in the process: pasteurization, coagulation and ripening.
Empowering biomedical engineering undergraduates to help teach design.
Allen, Robert H; Tam, William; Shoukas, Artin A
2004-01-01
We report on our experience empowering upperclassmen and seniors to help teach design courses in biomedical engineering. Initiated in the fall of 1998, these courses are a projects-based set, where teams of students from freshmen level to senior level converge to solve practical problems in biomedical engineering. One goal in these courses is to teach the design process by providing experiences that mimic it. Student teams solve practical projects solicited from faculty, industry and the local community. To hone skills and have a metric for grading, written documentation, posters and oral presentations are required over the two-semester sequence. By requiring a mock design and build exercise in the fall, students appreciate the manufacturing process, the difficulties unforeseen in the design stage and the importance of testing. A Web-based, searchable design repository captures reporting information from each project since its inception. This serves as a resource for future projects, in addition to traditional ones such as library, outside experts and lab facilities. Based on results to date, we conclude that characteristics about our design program help students experience design and learn aspects about teamwork and mentoring useful in their profession or graduate education.
Object-oriented software design in semiautomatic building extraction
NASA Astrophysics Data System (ADS)
Guelch, Eberhard; Mueller, Hardo
1997-08-01
Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only to the data but also to the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.
Using logic models in a community-based agricultural injury prevention project.
Helitzer, Deborah; Willging, Cathleen; Hathorn, Gary; Benally, Jeannie
2009-01-01
The National Institute for Occupational Safety and Health has long promoted the logic model as a useful tool in an evaluator's portfolio. Because a logic model supports a systematic approach to designing interventions, it is equally useful for program planners. Undertaken with community stakeholders, a logic model process articulates the underlying foundations of a particular programmatic effort and enhances program design and evaluation. Most often presented as sequenced diagrams or flow charts, logic models demonstrate relationships among the following components: statement of a problem, various causal and mitigating factors related to that problem, available resources to address the problem, theoretical foundations of the selected intervention, intervention goals and planned activities, and anticipated short- and long-term outcomes. This article describes a case example of how a logic model process was used to help community stakeholders on the Navajo Nation conceive, design, implement, and evaluate agricultural injury prevention projects.
Mariner 9 mapping science sequence design.
NASA Technical Reports Server (NTRS)
Goldman, A. M., Jr.
1973-01-01
The primary mission of Mariner 9 was to map the Martian surface. This paper discusses in detail the design of the mapping science sequences which were executed by the spacecraft in sixty days and during which over eighty percent of the surface was photographed. The sequence design was influenced by many factors: experimenter scientific objectives, instrument capabilities, spacecraft capabilities, orbit characteristics, and data return rates, which are illustrated graphically. Typical orbits are depicted for each of the three different mapping phases lasting twenty days. Examples of typical orbital sequence plans prepared daily during mission operations are given.
A computational proposal for designing structured RNA pools for in vitro selection of RNAs.
Kim, Namhee; Gan, Hin Hark; Schlick, Tamar
2007-04-01
Although in vitro selection technology is a versatile experimental tool for discovering novel synthetic RNA molecules, finding complex RNA molecules is difficult because most RNAs identified from random sequence pools are simple motifs, consistent with recent computational analysis of such sequence pools. Thus, enriching in vitro selection pools with complex structures could increase the probability of discovering novel RNAs. Here we develop an approach for engineering sequence pools that links RNA sequence space regions with corresponding structural distributions via a "mixing matrix" approach combined with a graph theory analysis. We define five classes of mixing matrices motivated by covariance mutations in RNA; these constructs define nucleotide transition rates and are applied to chosen starting sequences to yield specific nonrandom pools. We examine the coverage of sequence space as a function of the mixing matrix and starting sequence via clustering analysis. We show that, in contrast to random sequences, which are associated only with a local region of sequence space, our designed pools, including a structured pool for GTP aptamers, can target specific motifs. It follows that experimental synthesis of designed pools can benefit from using optimized starting sequences, mixing matrices, and pool fractions associated with each of our constructed pools as a guide. Automation of our approach could provide practical tools for pool design applications for in vitro selection of RNAs and related problems.
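The mixing-matrix idea can be sketched directly: nucleotide transition probabilities are applied position by position to a chosen starting sequence to generate a biased, nonrandom pool. The matrix values and starting sequence below are arbitrary placeholders, not the five matrix classes defined in the paper.

```python
# Sketch of generating a nonrandom sequence pool from a starting sequence
# and a nucleotide "mixing" (transition) matrix.  Values are illustrative.
import random

BASES = "ACGU"

# MIXING_MATRIX[b] gives the probability of keeping or mutating base b.
MIXING_MATRIX = {
    "A": {"A": 0.70, "C": 0.10, "G": 0.10, "U": 0.10},
    "C": {"A": 0.10, "C": 0.70, "G": 0.10, "U": 0.10},
    "G": {"A": 0.10, "C": 0.10, "G": 0.70, "U": 0.10},
    "U": {"A": 0.10, "C": 0.10, "G": 0.10, "U": 0.70},
}

def sample_base(base):
    probs = MIXING_MATRIX[base]
    r = random.random()
    acc = 0.0
    for b in BASES:
        acc += probs[b]
        if r <= acc:
            return b
    return BASES[-1]

def generate_pool(start_sequence, pool_size, seed=5):
    random.seed(seed)
    return ["".join(sample_base(b) for b in start_sequence)
            for _ in range(pool_size)]

if __name__ == "__main__":
    start = "GGGAUACCAGCCGAAAGGCCCUUGGCAGCGUC"   # arbitrary structured starting sequence
    for seq in generate_pool(start, 5):
        print(seq)
```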
Primer-Free Aptamer Selection Using A Random DNA Library
Pan, Weihua; Xin, Ping; Patrick, Susan; Dean, Stacey; Keating, Christine; Clawson, Gary
2010-01-01
Aptamers are highly structured oligonucleotides (DNA or RNA) that can bind to targets with affinities comparable to antibodies [1]. They are identified through an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX) to recognize a wide variety of targets, from small molecules to proteins and other macromolecules [2-4]. Aptamers have properties that are well suited for in vivo diagnostic and/or therapeutic applications: besides good specificity and affinity, they are easily synthesized, survive more rigorous processing conditions, they are poorly immunogenic, and their relatively small size can result in facile penetration of tissues. Aptamers that are identified through the standard SELEX process usually comprise ~80 nucleotides (nt), since they are typically selected from nucleic acid libraries with ~40 nt long randomized regions plus fixed primer sites of ~20 nt on each side. The fixed primer sequences thus can comprise nearly 50% of the library sequences, and therefore may positively or negatively compromise identification of aptamers in the selection process [3], although bioinformatics approaches suggest that the fixed sequences do not contribute significantly to aptamer structure after selection [5]. To address these potential problems, primer sequences have been blocked by complementary oligonucleotides or switched to different sequences midway during the rounds of SELEX [6], or they have been trimmed to 6-9 nt [7, 8]. Wen and Gray [9] designed a primer-free genomic SELEX method, in which the primer sequences were completely removed from the library before selection and were then regenerated to allow amplification of the selected genomic fragments. However, to employ the technique, a unique genomic library has to be constructed, which possesses limited diversity, and regeneration after rounds of selection relies on a linear reamplification step. Alternatively, efforts to circumvent problems caused by fixed primer sequences using high efficiency partitioning are met with problems regarding PCR amplification [10]. We have developed a primer-free (PF) selection method that significantly simplifies SELEX procedures and effectively eliminates primer-interference problems [11, 12]. The protocols work in a straightforward manner. The central random region of the library is purified without extraneous flanking sequences and is bound to a suitable target (for example to a purified protein or complex mixtures such as cell lines). Then the bound sequences are obtained, reunited with flanking sequences, and re-amplified to generate selected sub-libraries. As an example, here we selected aptamers to S100B, a protein marker for melanoma. Binding assays showed Kds in the 10⁻⁷-10⁻⁸ M range after a few rounds of selection, and we demonstrate that the aptamers function effectively in a sandwich binding format. PMID:20689511
An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets
2010-01-01
Background The data produced by an Illumina flow cell with all eight lanes occupied amount to well over a terabyte of images, with gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance. One can very easily be flooded with such a great volume of textual, unannotated data, irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, enables INDEL detection, SNP information, and allele calling. Extracting from such analysis not only a measure of gene expression in the form of tag counts, but also annotation of the sequenced reads, is therefore of significant value. Findings We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag counts while annotating sequenced reads with the gene's presumed function, from any given CASAVA build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag counting and annotation. The end result is output containing the homology-based functional annotation and the respective gene expression measure, signifying how many times sequenced reads were found within the genomic ranges of functional annotations. Conclusions TASE is a powerful tool that facilitates the process of annotating a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deeply into a given CASAVA build and maximize information extraction from a sequencing dataset. TASE is specially designed to translate sequence data in a CASAVA build into functional annotations while producing corresponding gene expression measurements. This analysis is executed in an ultrafast and highly efficient manner, whether it is a single-read or paired-end sequencing experiment. TASE is a user-friendly and freely available application, allowing rapid analysis and annotation of any given Illumina Solexa sequencing dataset with ease. PMID:20598141
Modal processing for acoustic communications in shallow water experiment.
Morozov, Andrey K; Preisig, James C; Papp, Joseph
2008-09-01
Acoustical array data from the Shallow Water Acoustics experiment were processed to show the feasibility of broadband mode decomposition as a preprocessing method to reduce the effective channel delay spread and concentrate received signal energy in a small number of independent channels. The data were collected by a vertical array designed at the Woods Hole Oceanographic Institution. Phase-shift keying (PSK) m-sequence modulated signals with different carrier frequencies were transmitted at a distance of 19.2 km from the array. Even during strong internal-wave activity, a low bit error rate was achieved.
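The m-sequences used to modulate the PSK probe signals are generated by a maximal-length linear feedback shift register. The sketch below is a generic Fibonacci LFSR with an example primitive polynomial (x^7 + x^6 + 1); the register length, carrier frequencies, and sequence parameters of the experiment are not reproduced.

```python
# Generic Fibonacci LFSR generator for maximal-length (m-)sequences.
# Example polynomial x^7 + x^6 + 1 is primitive, giving a period of 127.

def lfsr_m_sequence(poly_taps, nbits, seed=1):
    """Yield one full period of the m-sequence as +/-1 chips (for PSK mapping).
    poly_taps lists the nonzero exponents of the feedback polynomial other
    than the constant term, e.g. [7, 6] for x^7 + x^6 + 1."""
    state = seed & ((1 << nbits) - 1)
    assert state != 0, "LFSR seed must be non-zero"
    shifts = [nbits - t for t in poly_taps]        # exponent -> bit index from the LSB
    for _ in range((1 << nbits) - 1):
        yield 1 if (state & 1) else -1             # output chip is the LSB
        feedback = 0
        for s in shifts:                           # XOR of the tapped stages
            feedback ^= (state >> s) & 1
        state = (state >> 1) | (feedback << (nbits - 1))

if __name__ == "__main__":
    chips = list(lfsr_m_sequence(poly_taps=[7, 6], nbits=7))
    print("period :", len(chips))                  # 127 for a maximal-length register
    print("balance:", sum(chips))                  # 2^(n-1) ones per period, so the sum is 1
```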
Airport Planning and Design - Legal and Professional Competence Requirements
NASA Astrophysics Data System (ADS)
Kazda, Antonin
2017-12-01
Airport design and planning considerably differs from the design of other transport infrastructure. The reasons are the wide scope of regulation in civil aviation and the lack of links between the Civil Aviation Act and the Building Act. The effect is that the sequence of procedures, negotiation, and/or document approval is not clearly defined. The situation is further complicated by the fact that an airport is a unique construction both for the investor and for the local building authority. The paper is an outcome of our research, building on long-term experience in airport planning and design, and the elucidation of planning and approval processes with experts from the Transport Authority and the Ministry of Transport and Construction of the Slovak Republic.
Helical Axis Data Visualization and Analysis of the Knee Joint Articulation.
Millán Vaquero, Ricardo Manuel; Vais, Alexander; Dean Lynch, Sean; Rzepecki, Jan; Friese, Karl-Ingo; Hurschler, Christof; Wolter, Franz-Erich
2016-09-01
We present processing methods and visualization techniques for accurately characterizing and interpreting kinematical data of flexion-extension motion of the knee joint based on helical axes. We make use of the Lie group of rigid body motions and particularly its Lie algebra for a natural representation of motion sequences. This allows the finite helical axis (FHA) and the instantaneous helical axis (IHA) to be analyzed and computed in a unified way without redundant degrees of freedom or singularities. A polynomial fitting based on Legendre polynomials within the Lie algebra is applied to provide a smooth description of a given discrete knee motion sequence, which is essential for obtaining stable instantaneous helical axes for further analysis. Moreover, this allows for an efficient overall similarity comparison across several motion sequences in order to differentiate among several cases. Our approach combines a specifically designed, patient-specific three-dimensional visualization based on the processed helical axis information with computed tomography (CT) scans for an intuitive interpretation of the axes and their geometrical relation to the knee joint anatomy. In addition, in the context of the study of diseases affecting the musculoskeletal articulation, we propose to integrate the above tools into a multiscale framework for exploring related data sets distributed across multiple spatial scales. We demonstrate the utility of our methods by processing a collection of motion sequences acquired from experimental data involving several surgery techniques. Our approach enables an accurate analysis, visualization and comparison of knee joint articulation, contributing to evaluation and diagnosis in medical applications.
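Legendre-polynomial smoothing of a sampled motion signal can be sketched with NumPy's Legendre utilities. The example below fits a noisy scalar trace only; applying it componentwise to the Lie-algebra (se(3)) coordinates of a motion sequence, as the paper does, and extracting helical axes are beyond this illustration.

```python
# Sketch: Legendre-polynomial smoothing of a sampled motion signal.
# A real pipeline would apply this to the se(3) coordinates of the motion
# sequence before extracting instantaneous helical axes.
import numpy as np
from numpy.polynomial import legendre as L

def smooth_with_legendre(t, signal, degree=8):
    """Fit Legendre polynomials on t rescaled to [-1, 1] and return the smooth curve."""
    t = np.asarray(t, dtype=float)
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0   # map time to [-1, 1]
    coeffs = L.legfit(x, signal, degree)
    return L.legval(x, coeffs)

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 200)                        # simulated flexion-angle trace
    angle = 60.0 * np.sin(np.pi * t / 2.0) ** 2
    noisy = angle + np.random.default_rng(0).normal(0.0, 2.0, t.size)
    smooth = smooth_with_legendre(t, noisy, degree=8)
    print("rms error vs. ground truth:",
          float(np.sqrt(np.mean((smooth - angle) ** 2))))
```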
NASA Astrophysics Data System (ADS)
Mandala, Mahender Arjun
A cornerstone of design and design education is frequent situated feedback. With increasing class sizes, and shrinking financial and human resources, providing rich feedback to students becomes increasingly difficult. In the field of writing, web-based peer review--the process of utilizing equal status learners within a class to provide feedback to each other on their work using networked computing systems--has been shown to be a reliable and valid source of feedback in addition to improving student learning. Designers communicate in myriad ways, using the many languages of design and combining visual and descriptive information. This complex discourse of design intent makes peer reviews by design students ambiguous and often not helpful to the receivers of this feedback. Furthermore, engaging students in the review process itself is often difficult. Teams can complement individual diversity and may help novice designers collectively resolve complex tasks. However, teams often incur production losses and may be affected by individual biases. In the current work, we look at utilizing collaborative teams of reviewers, working collectively and synchronously, to generate web-based peer reviews in a sophomore engineering design class. Students participated in a cross-over design, conducting peer reviews as individuals and as collaborative teams in parallel sequences. Raters coded the feedback generated on the basis of its appropriateness and accuracy. Self-report surveys and passive observation of teams conducting reviews captured student opinion on the process, its value, and the contrasting experience they had conducting team and individual reviews. We found that team reviews generated better-quality feedback than individual reviews. Furthermore, students preferred conducting reviews in teams, finding the process 'fun' and engaging. We observed several learning benefits of using collaboration in reviewing, including improved understanding of the assessment criteria, roles, and expectations, and increased team reflection. These results provide insight into how to improve the review process for instructors and researchers, and form a basis for future research work in this area. With respect to facilitating the peer review process in design-based classrooms, we also present recommendations, supported by research and practical experience, for effective review system design and implementation in the classroom.
Automated design of degenerate codon libraries.
Mena, Marco A; Daugherty, Patrick S
2005-12-01
Degenerate codon libraries are frequently used in protein engineering and evolution studies but are often limited to targeting a small number of positions to adequately limit the search space. To mitigate this, codon degeneracy can be limited using heuristics or previous knowledge of the targeted positions. To automate design of libraries given a set of amino acid sequences, an algorithm (LibDesign) was developed that generates a set of possible degenerate codon libraries, their resulting size, and their score relative to a user-defined scoring function. A gene library of a specified size can then be constructed that is representative of the given amino acid distribution or that includes specific sequences or combinations thereof. LibDesign provides a new tool for automated design of high-quality protein libraries that more effectively harness existing sequence-structure information derived from multiple sequence alignment or computational protein design data.
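The bookkeeping behind a degenerate codon library is easy to make concrete. The snippet below expands IUPAC degenerate codons into the codons and amino acids they encode and multiplies per-position counts into a DNA-level library size; it illustrates the combinatorics LibDesign manages but not its scoring function.

```python
# Sketch: expand IUPAC degenerate codons (e.g. NNK) into encoded amino acids
# and compute the DNA-level library size.  Illustrative bookkeeping only.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

# Standard genetic code, codons ordered by (T, C, A, G) at each position.
_BASES = "TCAG"
_AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: _AA[i]
               for i, (b1, b2, b3) in enumerate(product(_BASES, repeat=3))}

def expand_degenerate_codon(codon):
    """All concrete codons and the set of amino acids a degenerate codon encodes."""
    concrete = ["".join(c) for c in product(*(IUPAC[b] for b in codon.upper()))]
    return concrete, {CODON_TABLE[c] for c in concrete}

def library_size(degenerate_codons):
    """Number of distinct DNA sequences in the library (product of codon expansions)."""
    size = 1
    for codon in degenerate_codons:
        size *= len(expand_degenerate_codon(codon)[0])
    return size

if __name__ == "__main__":
    codons, aas = expand_degenerate_codon("NNK")
    print("NNK encodes", len(codons), "codons covering:", "".join(sorted(aas)))
    print("library size for NNK x 5 positions:", library_size(["NNK"] * 5))
```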
Spotlight-8 Image Analysis Software
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2006-01-01
Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
Donlan, Chris; Cowan, Richard; Newton, Elizabeth J; Lloyd, Delyth
2007-04-01
A sample (n=48) of eight-year-olds with specific language impairments is compared with age-matched (n=55) and language-matched controls (n=55) on a range of tasks designed to test the interdependence of language and mathematical development. Performance across tasks varies substantially in the SLI group, showing profound deficits in production of the count word sequence and basic calculation and significant deficits in understanding of the place-value principle in Hindu-Arabic notation. Only in understanding of arithmetic principles does SLI performance approximate that of age-matched controls, indicating that principled understanding can develop even where number sequence production and other aspects of number processing are severely compromised.
Active Learning in a Large General Physics Classroom.
NASA Astrophysics Data System (ADS)
Trousil, Rebecca
2008-04-01
In 2004, we launched a new calculus-based, introductory physics sequence at Washington University. Designed as an alternative to our traditional lecture-based sequence, the primary objectives for this new course were to actively engage students in the learning process, to significantly strengthen students' conceptual reasoning skills, to help students develop higher level quantitative problem solving skills necessary for analyzing ``real world'' problems, and to integrate modern physics into the curriculum. This talk will describe our approach, using The Six Ideas That Shaped Physics text by Thomas Moore, to creating an active learning environment in large classes as well as share our perspective on key elements for success and challenges that we face in the large class environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Xiang; Zhang, Shuai; Jiao, Fang
Two-step nucleation pathways in which disordered, amorphous, or dense liquid states precede appearance of crystalline phases have been reported for a wide range of materials, but the dynamics of such pathways are poorly understood. Moreover, whether these pathways are general features of crystallizing systems or a consequence of system-specific structural details that select for direct vs. two-step processes is unknown. Using atomic force microscopy to directly observe crystallization of sequence-defined polymers, we show that crystallization pathways are indeed sequence dependent. When a short hydrophobic region is added to a sequence that directly forms crystalline particles, crystallization instead follows a two-step pathway that begins with creation of disordered clusters of 10-20 molecules and is characterized by highly non-linear crystallization kinetics in which clusters transform into ordered structures that then enter the growth phase. The results shed new light on non-classical crystallization mechanisms and have implications for design of self-assembling polymer systems.
Visual evoked potentials and selective attention to points in space
NASA Technical Reports Server (NTRS)
Van Voorhis, S.; Hillyard, S. A.
1977-01-01
Visual evoked potentials (VEPs) were recorded to sequences of flashes delivered to the right and left visual fields while subjects responded promptly to designated stimuli in one field at a time (focused attention), in both fields at once (divided attention), or to neither field (passive). Three stimulus schedules were used: the first was a replication of a previous study (Eason, Harter, and White, 1969) where left- and right-field flashes were delivered quasi-independently, while in the other two the flashes were delivered to the two fields in random order (Bernoulli sequence). VEPs to attended-field stimuli were enhanced at both occipital (O2) and central (Cz) recording sites under all stimulus sequences, but different components were affected at the two scalp sites. It was suggested that the VEP at O2 may reflect modality-specific processing events, while the response at Cz, like its auditory homologue, may index more general aspects of selective attention.
Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver
2008-06-01
The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
Wu, Yushu; Wang, Lei; Jiang, Wei
2017-03-15
Sensitive detection of uracil-DNA glycosylase (UDG) activity is beneficial for evaluating the repair process of DNA lesions. Here, a toehold-mediated strand displacement reaction (TSDR)-dependent fluorescent strategy was constructed for sensitive detection of UDG activity. A single-stranded DNA (ssDNA) probe with two uracil bases and a trigger sequence was designed, along with a hairpin probe containing a toehold domain and a reporter probe. Under the action of UDG, the two uracil bases were removed from the ssDNA probe, generating apurinic/apyrimidinic (AP) sites. The AP sites then inhibited the TSDR between the ssDNA probe and the hairpin probe, leaving the trigger sequence in the ssDNA probe free. Subsequently, the trigger sequence was annealed with the reporter probe, initiating the polymerization and nicking amplification reaction. As a result, numerous G-quadruplex (G4) structures were formed, which could bind with N-methyl-mesoporphyrin IX (NMM) to generate an enhanced fluorescent signal. In the absence of UDG, the ssDNA probe hybridized with the toehold domain of the hairpin probe to initiate TSDR, blocking the trigger sequence, so the subsequent amplification reaction did not occur. The proposed strategy was successfully implemented for detecting UDG activity with a detection limit of 2.7×10⁻⁵ U/mL. Moreover, the strategy could distinguish UDG well from other interfering enzymes. Furthermore, the strategy was also applied for detecting UDG activity in HeLa cell lysate with little effect from cellular components. These results indicate that the proposed strategy offers a promising tool for sensitive quantification of UDG activity in UDG-related function studies and disease prognosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Sanchez-Lite, Alberto; Garcia, Manuel; Domingo, Rosario; Angel Sebastian, Miguel
2013-01-01
Background Musculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle. Methodology/Principal Findings This paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process. Conclusions The method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method’s usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method. PMID:23977340
High-volume workflow management in the ITN/FBI system
NASA Astrophysics Data System (ADS)
Paulson, Thomas L.
1997-02-01
The Identification Tasking and Networking (ITN) Federal Bureau of Investigation system will manage the processing of more than 70,000 submissions per day. The workflow manager controls the routing of each submission through a combination of automated and manual processing steps whose exact sequence is dynamically determined by the results at each step. For most submissions, one or more of the steps involve the visual comparison of fingerprint images. The ITN workflow manager is implemented within a scaleable client/server architecture. The paper describes the key aspects of the ITN workflow manager design which allow the high volume of daily processing to be successfully accomplished.
Hirsch, B; Endris, V; Lassmann, S; Weichert, W; Pfarr, N; Schirmacher, P; Kovaleva, V; Werner, M; Bonzheim, I; Fend, F; Sperveslage, J; Kaulich, K; Zacher, A; Reifenberger, G; Köhrer, K; Stepanow, S; Lerke, S; Mayr, T; Aust, D E; Baretton, G; Weidner, S; Jung, A; Kirchner, T; Hansmann, M L; Burbat, L; von der Wall, E; Dietel, M; Hummel, M
2018-04-01
The simultaneous detection of multiple somatic mutations in the context of molecular diagnostics of cancer is frequently performed by means of amplicon-based targeted next-generation sequencing (NGS). However, only few studies are available comparing multicenter testing of different NGS platforms and gene panels. Therefore, seven partner sites of the German Cancer Consortium (DKTK) performed a multicenter interlaboratory trial for targeted NGS using the same formalin-fixed, paraffin-embedded (FFPE) specimen of molecularly pre-characterized tumors (n = 15; each n = 5 cases of Breast, Lung, and Colon carcinoma) and a colorectal cancer cell line DNA dilution series. Detailed information regarding pre-characterized mutations was not disclosed to the partners. Commercially available and custom-designed cancer gene panels were used for library preparation and subsequent sequencing on several devices of two NGS different platforms. For every case, centrally extracted DNA and FFPE tissue sections for local processing were delivered to each partner site to be sequenced with the commercial gene panel and local bioinformatics. For cancer-specific panel-based sequencing, only centrally extracted DNA was analyzed at seven sequencing sites. Subsequently, local data were compiled and bioinformatics was performed centrally. We were able to demonstrate that all pre-characterized mutations were re-identified correctly, irrespective of NGS platform or gene panel used. However, locally processed FFPE tissue sections disclosed that the DNA extraction method can affect the detection of mutations with a trend in favor of magnetic bead-based DNA extraction methods. In conclusion, targeted NGS is a very robust method for simultaneous detection of various mutations in FFPE tissue specimens if certain pre-analytical conditions are carefully considered.
Dynamics of Preferential Substrate Recognition in HIV-1 Protease: Redefining the Substrate Envelope
Özen, Ayşegül; Haliloğlu, Türkan; Schiffer, Celia A.
2011-01-01
HIV-1 protease (PR) permits viral maturation by processing the Gag and Gag-Pro-Pol polyproteins. Though HIV-1 PR inhibitors (PIs) are used in combination antiviral therapy, the emergence of drug resistance has limited their efficacy. The rapid evolution of HIV-1 necessitates the consideration of drug resistance in novel drug-design strategies. Drug-resistant HIV-1 PR variants, while no longer efficiently inhibited, continue to efficiently hydrolyze the natural viral substrates. Though highly diverse in sequence, the HIV-1 PR substrates bind in a conserved three-dimensional shape we defined as the “substrate envelope”. We previously showed that resistance mutations arise where PIs protrude beyond the substrate envelope, as these regions are crucial for drug binding but not for substrate recognition. Here, we extend this model by considering the role of protein dynamics in the interaction of HIV-1 PR with its substrates. Seven molecular dynamics simulations of PR-substrate complexes were performed to estimate the conformational flexibility of substrates in their complexes. Interdependency of the substrate-protease interactions may compensate for the variations in cleavage-site sequences, and explain how a diverse set of sequences can be recognized as substrates by the same enzyme. This diversity may be essential for regulating sequential processing of substrates. We also define a dynamic substrate envelope as a more accurate representation of PR-substrate interactions. This dynamic substrate envelope, described by a probability distribution function, is a powerful tool for drug design efforts targeting ensembles of resistant HIV-1 PR variants with the aim of developing drugs that are less susceptible to resistance. PMID:21762811
SlideSort: all pairs similarity search for short reads
Shimizu, Kana; Tsuda, Koji
2011-01-01
Motivation: Recent progress in DNA sequencing technologies calls for fast and accurate algorithms that can evaluate sequence similarity for a huge amount of short reads. Searching similar pairs from a string pool is a fundamental process of de novo genome assembly, genome-wide alignment and other important analyses. Results: In this study, we designed and implemented an exact algorithm SlideSort that finds all similar pairs from a string pool in terms of edit distance. Using an efficient pattern growth algorithm, SlideSort discovers chains of common k-mers to narrow down the search. Compared to existing methods based on single k-mers, our method is more effective in reducing the number of edit distance calculations. In comparison to backtracking methods such as BWA, our method is much faster in finding remote matches, scaling easily to tens of millions of sequences. Our software has an additional function of single link clustering, which is useful in summarizing short reads for further processing. Availability: Executable binary files and C++ libraries are available at http://www.cbrc.jp/~shimizu/slidesort/ for Linux and Windows. Contact: slidesort@m.aist.go.jp; shimizu-kana@aist.go.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21148542
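The overall flow of k-mer-guided all-pairs search can be illustrated with a naive version: bucket reads by shared k-mers to propose candidate pairs, then verify each candidate with an edit-distance computation. SlideSort's chained k-mer pattern growth and pruning are not reproduced in this sketch.

```python
# Naive sketch of all-pairs similarity search for short reads:
# candidate pairs share at least one k-mer, then edit distance verifies them.
from collections import defaultdict
from itertools import combinations

def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similar_pairs(reads, k=8, max_dist=2):
    """Pairs of reads within max_dist edits that share at least one k-mer."""
    buckets = defaultdict(set)
    for idx, read in enumerate(reads):
        for p in range(len(read) - k + 1):
            buckets[read[p:p + k]].add(idx)
    candidates = set()
    for ids in buckets.values():
        candidates.update(combinations(sorted(ids), 2))
    results = []
    for i, j in sorted(candidates):
        d = edit_distance(reads[i], reads[j])
        if d <= max_dist:
            results.append((i, j, d))
    return results

if __name__ == "__main__":
    reads = ["ACGTACGTACGTACGT", "ACGTACGAACGTACGT", "TTTTGGGGCCCCAAAA"]
    print(similar_pairs(reads))    # reads 0 and 1 differ by a single edit
```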
GENESUS: a two-step sequence design program for DNA nanostructure self-assembly.
Tsutsumi, Takanobu; Asakawa, Takeshi; Kanegami, Akemi; Okada, Takao; Tahira, Tomoko; Hayashi, Kenshi
2014-01-01
DNA has been recognized as an ideal material for bottom-up construction of nanometer scale structures by self-assembly. The generation of sequences optimized for unique self-assembly (GENESUS) program reported here is a straightforward method for generating sets of strand sequences optimized for self-assembly of arbitrarily designed DNA nanostructures by a generate-candidates-and-choose-the-best strategy. A scalable procedure to prepare single-stranded DNA having arbitrary sequences is also presented. Strands for the assembly of various structures were designed and successfully constructed, validating both the program and the procedure.
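The generate-candidates-and-choose-the-best strategy can be sketched with a toy uniqueness criterion. Below, candidate strand sets are generated at random and scored by their worst unintended reverse-complement match; this stands in for, but does not reproduce, the program's actual sequence-optimization rules.

```python
# Sketch of generate-candidates-and-choose-the-best strand-set design.
# Uniqueness is scored by the longest unintended reverse-complement match;
# this toy criterion stands in for the program's real scoring rules.
import random

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def longest_common_substring(a, b):
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

def crosstalk(strands):
    """Worst unintended complementarity between any two strands (lower is better),
    including each strand against its own reverse complement."""
    worst = 0
    for i, s in enumerate(strands):
        for t in strands[i:]:
            worst = max(worst, longest_common_substring(s, revcomp(t)))
    return worst

def design_strands(n_strands=4, length=20, candidates=100, seed=2):
    random.seed(seed)
    best_set, best_score = None, None
    for _ in range(candidates):
        cand = ["".join(random.choice("ACGT") for _ in range(length))
                for _ in range(n_strands)]
        score = crosstalk(cand)
        if best_score is None or score < best_score:
            best_set, best_score = cand, score
    return best_set, best_score

if __name__ == "__main__":
    strands, score = design_strands()
    print("worst unintended complement length:", score)
    for s in strands:
        print(s)
```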
2010-01-01
Background Cultivated strawberry is a hybrid octoploid species (Fragaria × ananassa Duchesne ex Rozier) whose fruit is highly appreciated due to its organoleptic properties and health benefits. Despite recent studies on the control of its growth and ripening processes, information about the role played by different hormones on these processes remains elusive. Further advancement of this knowledge is hampered by the limited sequence information on genes from this species, despite the abundant information available on genes from the wild diploid relative Fragaria vesca. However, the diploid species, or one ancestor, only partially contributes to the genome of the cultivated octoploid. We have produced a collection of expressed sequence tags (ESTs) from different cDNA libraries prepared from different fruit parts and developmental stages. The collection has been analysed and the sequence information used to explore the involvement of different hormones in fruit developmental processes, and for the comparison of transcripts in the receptacle of ripe fruits of diploid and octoploid species. The study is particularly important since the commercial fruit is indeed an enlarged flower receptacle with the true fruits, the achenes, on the surface and connected through a network of vascular vessels to the central pith. Results We have sequenced over 4,500 ESTs from Fragaria × ananassa, thus doubling the number of ESTs available in GenBank for this species. We then assembled this information together with that available from F. × ananassa, resulting in a total of 7,096 unigenes. The identification of SSRs and SNPs in many of the ESTs allowed their conversion into functional molecular markers. The availability of libraries prepared from green growing fruits has allowed the cloning of cDNAs encoding for genes of auxin, ethylene and brassinosteroid signalling processes, followed by expression studies in selected fruit parts and developmental stages. In addition, the sequence information generated in the project, jointly with previous information on sequences from both F. × ananassa and F. vesca, has allowed designing an oligo-based microarray that has been used to compare the transcriptome of the ripe receptacle of the diploid and octoploid species. Comparison of the transcriptomes, grouping the genes by biological processes, points to differences being quantitative rather than qualitative. Conclusions The present study generates essential knowledge and molecular tools that will be useful in improving investigations at the molecular level in cultivated strawberry (F. × ananassa). This knowledge is likely to provide useful resources in the ongoing breeding programs. The sequence information has already allowed the development of molecular markers that have been applied to germplasm characterization and could be eventually used in QTL analysis. Massive transcription analysis can be of utility to target specific genes to be further studied, by their involvement in the different plant developmental processes. PMID:20849591
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
Subsequent to the design review, a series of tests was conducted on simulated modules to demonstrate that all environmental specifications (wind loading, hailstone impact, thermal cycling, and humidity cycling) are satisfied by the design. All tests, except hailstone impact, were successfully completed. The assembly sequence was simplified by virtue of eliminating the frame components and assembly steps. Performance was improved by reducing the module edge border required to accommodate the frame of the preliminary design module. An ultrasonic rolling spot bonding technique was selected for use in the machine to perform the aluminum interconnect to cell metallization electrical joints required in the MEPSDU module configuration. This selection was based on extensive experimental tests and economic analyses.
Long-read sequencing data analysis for yeasts.
Yue, Jia-Xing; Liti, Gianni
2018-06-01
Long-read sequencing technologies have become increasingly popular due to their strengths in resolving complex genomic regions. As a leading model organism with small genome size and great biotechnological importance, the budding yeast Saccharomyces cerevisiae has many isolates currently being sequenced with long reads. However, analyzing long-read sequencing data to produce high-quality genome assembly and annotation remains challenging. Here, we present a modular computational framework named long-read sequencing data analysis for yeasts (LRSDAY), the first one-stop solution that streamlines this process. Starting from the raw sequencing reads, LRSDAY can produce chromosome-level genome assembly and comprehensive genome annotation in a highly automated manner with minimal manual intervention, which is not possible using any alternative tool available to date. The annotated genomic features include centromeres, protein-coding genes, tRNAs, transposable elements (TEs), and telomere-associated elements. Although tailored for S. cerevisiae, we designed LRSDAY to be highly modular and customizable, making it adaptable to virtually any eukaryotic organism. When applying LRSDAY to an S. cerevisiae strain, it takes ∼41 h to generate a complete and well-annotated genome from ∼100× Pacific Biosciences (PacBio) reads, running the basic workflow with four threads. Basic experience working within the Linux command-line environment is recommended for carrying out the analysis using LRSDAY.
Vasudevan, John M; Logan, Andrew; Shultz, Rebecca; Koval, Jeffrey J; Roh, Eugene Y; Fredericson, Michael
2016-01-01
Aim. The purpose of this pilot study is to use surface electromyography to determine an individual athlete's typical muscle onset activation sequence when performing a golf or tennis forward swing and to use the method to assess to what degree the sequence is reproduced with common conditioning exercises and a machine designed for this purpose. Methods. Data for 18 healthy male subjects were collected for 15 muscles of the trunk and lower extremities. Data were filtered and processed to determine the average onset of muscle activation for each motion. A Spearman correlation estimated congruence of activation order between the swing and each exercise. Correlations of each group were pooled with 95% confidence intervals using a random effects meta-analytic strategy. Results. The averaged sequences differed among each athlete tested, but pooled correlations demonstrated a positive association between each exercise and the participants' natural muscle onset activation sequence. Conclusion. The selected training exercises and Turning Point™ device all partially reproduced our athletes' averaged muscle onset activation sequences for both sports. The results support consideration of a larger, adequately powered study using this method to quantify to what degree each of the selected exercises is appropriate for use in both golf and tennis.
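The core comparison described in the Methods can be illustrated with a short sketch: given mean onset times per muscle for the swing and for an exercise, a Spearman rank correlation measures how well the activation order is reproduced. The muscle names and onset times below are hypothetical, and the random-effects pooling across athletes is not shown.

```python
# A minimal sketch (illustrative, not the study's analysis code) of comparing
# muscle onset-activation order between a swing and a conditioning exercise
# with a Spearman rank correlation.
from scipy.stats import spearmanr

# Mean onset times (ms) per muscle, hypothetical values.
swing_onsets = {"gluteus_max": 120, "obliques": 150, "erector_spinae": 180,
                "rectus_femoris": 200, "gastrocnemius": 260}
exercise_onsets = {"gluteus_max": 110, "obliques": 170, "erector_spinae": 160,
                   "rectus_femoris": 230, "gastrocnemius": 250}

muscles = sorted(swing_onsets)                 # common muscle set, fixed order
x = [swing_onsets[m] for m in muscles]         # onset times for the swing
y = [exercise_onsets[m] for m in muscles]      # onset times for the exercise

rho, p = spearmanr(x, y)                       # rank correlation of activation order
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```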
Sequence features of viral and human Internal Ribosome Entry Sites predictive of their activity
Elias-Kirma, Shani; Nir, Ronit; Segal, Eran
2017-01-01
Translation of mRNAs through Internal Ribosome Entry Sites (IRESs) has emerged as a prominent mechanism of cellular and viral translation initiation. It supports cap-independent translation of select cellular genes under normal conditions, and in conditions when cap-dependent translation is inhibited. IRES structure and sequence are believed to be involved in this process. However, due to the small number of IRESs known, there have been no systematic investigations of the determinants of IRES activity. With the recent discovery of thousands of novel IRESs in human and viruses, the next challenge is to decipher the sequence determinants of IRES activity. We present the first in-depth computational analysis of a large body of IRESs, exploring RNA sequence features predictive of IRES activity. We identified predictive k-mer features resembling IRES trans-acting factor (ITAF) binding motifs across human and viral IRESs, and found that their effect on expression depends on their sequence, number and position. Our results also suggest that the architecture of retroviral IRESs differs from that of other viruses, presumably due to their exposure to the nuclear environment. Finally, we measured IRES activity of synthetically designed sequences to confirm our prediction of increasing activity as a function of the number of short IRES elements. PMID:28922394
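A hedged sketch of the kind of feature extraction such an analysis relies on: each candidate IRES sequence is converted into counts of all k-mers, which can then serve as predictors of measured IRES activity. This is illustrative only, not the authors' pipeline.

```python
# A sketch (not the authors' pipeline) of k-mer feature extraction: each
# candidate IRES sequence becomes a vector of k-mer counts, usable as
# predictors of measured IRES activity in a downstream model.
from collections import Counter
from itertools import product

def kmer_features(seq, k=3):
    """Counts of every DNA k-mer in `seq`, over a fixed ACGT vocabulary."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts.get(kmer, 0) for kmer in vocab]

# Hypothetical sequences; RNA is written as DNA (U -> T) for simplicity.
sequences = ["ACGTAGGATCC", "GGGAACCTTACGT"]
features = [kmer_features(s) for s in sequences]
print(len(features[0]))  # 4**3 = 64 features per sequence
```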
Sequence similarity is more relevant than species specificity in probabilistic backtranslation.
Ferro, Alfredo; Giugno, Rosalba; Pigola, Giuseppe; Pulvirenti, Alfredo; Di Pietro, Cinzia; Purrello, Michele; Ragusa, Marco
2007-02-21
Backtranslation is the process of decoding a sequence of amino acids into the corresponding codons. All synthetic gene design systems include a backtranslation module. The degeneracy of the genetic code makes backtranslation potentially ambiguous, since most amino acids are encoded by multiple codons. The common approach to overcome this difficulty is based on imitation of codon usage within the target species. This paper describes EasyBack, a new parameter-free, fully automated software tool for backtranslation using Hidden Markov Models. EasyBack is not based on imitation of codon usage within the target species, but instead uses a sequence-similarity criterion. The model is trained with a set of proteins with known cDNA coding sequences, constructed from the input protein by querying the NCBI databases with BLAST. Unlike existing software, the proposed method allows the quality of prediction to be estimated. When tested on a group of proteins that show different degrees of sequence conservation, EasyBack outperforms other published methods in terms of precision. The prediction quality of protein backtranslation is markedly increased by replacing the most-used-codon criterion of the target species with a Hidden Markov Model trained on a set of the most similar sequences from all species. Moreover, the proposed method allows the quality of prediction to be estimated probabilistically.
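To make the contrast concrete, the sketch below illustrates similarity-guided backtranslation in its simplest form: codons are sampled according to frequencies tallied from the coding sequences of similar proteins (e.g., BLAST hits), rather than always taking the target species' most-used codon. EasyBack itself uses a Hidden Markov Model; the codon frequencies here are invented.

```python
# A sketch of similarity-guided backtranslation: sample codons according to
# frequencies observed in cDNAs of similar proteins, instead of always using
# the target species' most-used codon. The frequencies below are invented,
# and EasyBack itself uses a Hidden Markov Model rather than this scheme.
import random

codon_freqs = {
    "M": {"ATG": 1.0},
    "K": {"AAA": 0.4, "AAG": 0.6},
    "L": {"CTG": 0.5, "CTC": 0.2, "CTT": 0.2, "TTA": 0.1},
    "*": {"TAA": 0.5, "TGA": 0.3, "TAG": 0.2},
}

def backtranslate(protein, freqs, seed=0):
    random.seed(seed)
    codons = []
    for aa in protein:
        table = freqs[aa]
        codon = random.choices(list(table), weights=list(table.values()))[0]
        codons.append(codon)
    return "".join(codons)

print(backtranslate("MKL*", codon_freqs))
```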
CisSERS: Customizable in silico sequence evaluation for restriction sites
Sharpe, Richard M.; Koepke, Tyson; Harper, Artemus; ...
2016-04-12
High-throughput sequencing continues to produce an immense volume of information that is processed and assembled into mature sequence data. Here, data analysis tools are urgently needed that leverage the embedded DNA sequence polymorphisms and consequent changes to restriction sites or sequence motifs in a high-throughput manner to enable biological experimentation. CisSERS was developed as a standalone open source tool to analyze sequence datasets and provide biologists with individual or comparative genome organization information in terms of presence and frequency of patterns or motifs such as restriction enzymes. Predicted agarose gel visualization of the custom analyses results was also integrated to enhance the usefulness of the software. CisSERS offers several novel functionalities, such as handling of large and multiple datasets in parallel, multiple restriction enzyme site detection and custom motif detection features, which are seamlessly integrated with real-time agarose gel visualization. Using a simple fasta-formatted file as input, CisSERS utilizes the REBASE enzyme database. Results from CisSERS enable the user to make decisions for designing genotyping by sequencing experiments, reduced representation sequencing, 3’UTR sequencing, and cleaved amplified polymorphic sequence (CAPS) molecular markers for large sample sets. CisSERS is a Java-based graphical user interface built around a Perl backbone. Several of the applications of CisSERS, including CAPS molecular marker development, were successfully validated using wet-lab experimentation. Here, we present the tool CisSERS and results from in silico and corresponding wet-lab analyses demonstrating that CisSERS is a technology platform solution that facilitates efficient data utilization in genomics and genetics studies.
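The core operation that CisSERS automates can be sketched as follows (this is not CisSERS itself): scan input sequences for restriction-enzyme recognition sites and report the fragment sizes that would be predicted on an agarose gel. The recognition sites below are standard, but the cut position is simplified to the start of the site, and a real tool would pull enzyme definitions from REBASE.

```python
# A sketch (not CisSERS itself) of its core operation: scan sequences for
# restriction-enzyme recognition sites and report predicted fragment sizes.
# Cut positions are simplified to the start of the recognition site.
import re

ENZYMES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT", "TaqI": "TCGA"}

def cut_sites(seq, site):
    return [m.start() for m in re.finditer(site, seq.upper())]

def fragment_sizes(seq, site):
    edges = [0] + cut_sites(seq, site) + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:]) if b > a]

seq = "AAGAATTCGGTTTAAGCTTCCGAATTCTT"   # toy input sequence
for enzyme, site in ENZYMES.items():
    print(enzyme, cut_sites(seq, site), fragment_sizes(seq, site))
```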
GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.
Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun
2013-01-01
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computations efficiently by using the computational power of massive computing hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman implementation on GPUs with a frequency-based filtration method, rather than merely accelerating the comparisons while still expending computational resources on unnecessary ones. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and a human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection without time constraints.
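For orientation, a plain CPU sketch of the two ingredients mentioned above is given below: a cheap composition-based distance stands in for the frequency filtration step (skipping clearly dissimilar database sequences), followed by Smith-Waterman local alignment scoring with a linear gap penalty for the sequences that pass. It is illustrative only and not the CUDA-SWf implementation; the filtration threshold is arbitrary.

```python
# A minimal CPU sketch (not CUDA-SWf): a composition-based pre-filter that
# skips clearly dissimilar sequences, then Smith-Waterman local alignment
# scoring with a linear gap penalty for the sequences that pass.
from collections import Counter

def composition_distance(a, b):
    """Cheap filter: L1 distance between residue-composition vectors."""
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[x] - cb[x]) for x in set(ca) | set(cb))

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score (no traceback), linear gap penalty."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

query, database = "HEAGAWGHEE", ["PAWHEAE", "KKKKKKKKKK"]
for target in database:
    if composition_distance(query, target) > 15:   # arbitrary filtration threshold
        continue                                   # skip the expensive alignment
    print(target, smith_waterman_score(query, target))
```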
E2FM: an encrypted and compressed full-text index for collections of genomic sequences.
Montecuollo, Ferdinando; Schmid, Giovannni; Tagliaferri, Roberto
2017-09-15
Next Generation Sequencing (NGS) platforms and, more generally, high-throughput technologies are giving rise to an exponential growth in the size of nucleotide sequence databases. Moreover, many emerging applications of nucleotide datasets, such as those related to personalized medicine, require compliance with regulations about the storage and processing of sensitive data. We have designed and carefully engineered E2FM-index, a new full-text index in minute space optimized for compressing and encrypting nucleotide sequence collections in FASTA format and for performing fast pattern-search queries. E2FM-index allows building self-indexes which occupy as little as 1/20 of the storage required by the input FASTA file, thus saving about 95% of storage when indexing collections of highly similar sequences; moreover, it can exactly search the built indexes for patterns in times ranging from a few milliseconds to a few hundred milliseconds, depending on pattern length. Source code is available at https://github.com/montecuollo/E2FM . ferdinando.montecuollo@unicampania.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
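For readers unfamiliar with this family of data structures, the sketch below shows the basic query a full-text index of this kind answers: counting pattern occurrences by backward search over the Burrows-Wheeler transform. Real indexes such as E2FM add compression, occurrence sampling and, in this case, encryption, none of which is shown here.

```python
# A sketch of counting pattern occurrences by backward search over the
# Burrows-Wheeler transform (BWT). This naive version rebuilds counts on the
# fly; real indexes (E2FM included) add compression, sampling and encryption.
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(pattern, bwt_str):
    first_index = {}                       # C[c]: number of characters < c
    for idx, c in enumerate(sorted(bwt_str)):
        first_index.setdefault(c, idx)
    def occ(c, i):                         # occurrences of c in bwt_str[:i]
        return bwt_str[:i].count(c)
    lo, hi = 0, len(bwt_str)               # half-open suffix-array interval
    for c in reversed(pattern):
        if c not in first_index:
            return 0
        lo = first_index[c] + occ(c, lo)
        hi = first_index[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

index = bwt("ACGTACGTACGA")
print(count_occurrences("ACG", index))     # -> 3
```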
Droplet-based pyrosequencing using digital microfluidics.
Boles, Deborah J; Benton, Jonathan L; Siew, Germaine J; Levy, Miriam H; Thwar, Prasanna K; Sandahl, Melissa A; Rouse, Jeremy L; Perkins, Lisa C; Sudarsan, Arjun P; Jalili, Roxana; Pamula, Vamsee K; Srinivasan, Vijay; Fair, Richard B; Griffin, Peter B; Eckhardt, Allen E; Pollack, Michael G
2011-11-15
The feasibility of implementing pyrosequencing chemistry within droplets using electrowetting-based digital microfluidics is reported. An array of electrodes patterned on a printed-circuit board was used to control the formation, transportation, merging, mixing, and splitting of submicroliter-sized droplets contained within an oil-filled chamber. A three-enzyme pyrosequencing protocol was implemented in which individual droplets contained enzymes, deoxyribonucleotide triphosphates (dNTPs), and DNA templates. The DNA templates were anchored to magnetic beads which enabled them to be thoroughly washed between nucleotide additions. Reagents and protocols were optimized to maximize signal over background, linearity of response, cycle efficiency, and wash efficiency. As an initial demonstration of feasibility, a portion of a 229 bp Candida parapsilosis template was sequenced using both a de novo protocol and a resequencing protocol. The resequencing protocol generated over 60 bp of sequence with 100% sequence accuracy based on raw pyrogram levels. Excellent linearity was observed for all of the homopolymers (two, three, or four nucleotides) contained in the C. parapsilosis sequence. With improvements in microfluidic design it is expected that longer reads, higher throughput, and improved process integration (i.e., "sample-to-sequence" capability) could eventually be achieved using this low-cost platform.
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
Advances in understanding tumour evolution through single-cell sequencing.
Kuipers, Jack; Jahn, Katharina; Beerenwinkel, Niko
2017-04-01
The mutational heterogeneity observed within tumours poses additional challenges to the development of effective cancer treatments. A thorough understanding of a tumour's subclonal composition and its mutational history is essential to open up the design of treatments tailored to individual patients. Comparative studies on a large number of tumours permit the identification of mutational patterns which may refine forecasts of cancer progression, response to treatment and metastatic potential. The composition of tumours is shaped by evolutionary processes. Recent advances in next-generation sequencing offer the possibility to analyse the evolutionary history and accompanying heterogeneity of tumours at an unprecedented resolution, by sequencing single cells. New computational challenges arise when moving from bulk to single-cell sequencing data, leading to the development of novel modelling frameworks. In this review, we present the state of the art methods for understanding the phylogeny encoded in bulk or single-cell sequencing data, and highlight future directions for developing more comprehensive and informative pictures of tumour evolution. This article is part of a Special Issue entitled: Evolutionary principles - heterogeneity in cancer?, edited by Dr. Robert A. Gatenby. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Extensive gene conversion at the PMS2 DNA mismatch repair locus.
Hayward, Bruce E; De Vos, Michel; Valleley, Elizabeth M A; Charlton, Ruth S; Taylor, Graham R; Sheridan, Eamonn; Bonthron, David T
2007-05-01
Mutations of the PMS2 DNA repair gene predispose to a characteristic range of malignancies, with either childhood onset (when both alleles are mutated) or a partially penetrant adult onset (if heterozygous). These mutations have been difficult to detect, due to interference from a family of pseudogenes located on chromosome 7. One of these, the PMS2CL pseudogene, lies within a 100-kb inverted duplication (inv dup), 700 kb centromeric to PMS2 itself on 7p22. Here, we show that the reference genomic sequences cannot be relied upon to distinguish PMS2 from PMS2CL, because of sequence transfer between the two loci. The 7p22 inv dup occurred prior to the divergence of modern ape species (15 million years ago [Mya]), but has undergone extensive sequence homogenization. This process appears to be ongoing, since there is considerable allelic diversity within the duplicated region, much of it derived from sequence exchange between PMS2 and PMS2CL. This sequence diversity can result in both false-positive and false-negative mutation analysis at this locus. Great caution is still needed in the design and interpretation of PMS2 mutation screens. 2007 Wiley-Liss, Inc.
Hall, Miquette; Chattaway, Marie A.; Reuter, Sandra; Savin, Cyril; Strauch, Eckhard; Carniel, Elisabeth; Connor, Thomas; Van Damme, Inge; Rajakaruna, Lakshani; Rajendram, Dunstan; Jenkins, Claire; Thomson, Nicholas R.
2014-01-01
The genus Yersinia is a large and diverse bacterial genus consisting of human-pathogenic species, a fish-pathogenic species, and a large number of environmental species. Recently, the phylogenetic and population structure of the entire genus was elucidated through the genome sequence data of 241 strains encompassing every known species in the genus. Here we report the mining of this enormous data set to create a multilocus sequence typing-based scheme that can identify Yersinia strains to the species level to a level of resolution equal to that for whole-genome sequencing. Our assay is designed to be able to accurately subtype the important human-pathogenic species Yersinia enterocolitica to whole-genome resolution levels. We also report the validation of the scheme on 386 strains from reference laboratory collections across Europe. We propose that the scheme is an important molecular typing system to allow accurate and reproducible identification of Yersinia isolates to the species level, a process often inconsistent in nonspecialist laboratories. Additionally, our assay is the most phylogenetically informative typing scheme available for Y. enterocolitica. PMID:25339391
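The general logic behind such a typing scheme can be sketched with toy data (this is not the actual Yersinia scheme): each locus sequence maps to a numbered allele, and the combination of allele numbers, the profile, maps to a sequence type.

```python
# A toy sketch of multilocus sequence typing (MLST) logic: locus sequence ->
# allele number, allele profile -> sequence type. All data here are invented
# and do not correspond to the published Yersinia scheme.
alleles = {
    "adk":  {"ATGAAA": 1, "ATGAAG": 2},
    "gyrB": {"TTGACC": 1, "TTGACT": 2},
}
profiles = {(1, 1): "ST-1", (2, 1): "ST-2", (1, 2): "ST-3"}

def sequence_type(locus_seqs):
    profile = tuple(alleles[locus][locus_seqs[locus]] for locus in sorted(alleles))
    return profiles.get(profile, "novel profile")

print(sequence_type({"adk": "ATGAAG", "gyrB": "TTGACC"}))  # -> ST-2
```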
Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.
2017-01-01
Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030
Cai, Q; Storey, K B
1997-08-01
The present study identifies a previously cloned cDNA, pBTaR914, as homologous to the mitochondrial WANCY (tryptophan, alanine, asparagine, cysteine, and tyrosine) tRNA gene cluster. This cDNA clone has a 304-bp sequence and its homologue, pBTaR09, has a 158-bp sequence with a long poly(A)+ tail (more than 60 adenosines). RNA blotting analysis using pBTaR914 probe against the total RNA from the tissues of adult and hatchling turtles revealed five bands: 540, 1800, 2200, 3200, and 3900 nucleotides (nt). The 540-nt transcript is considered to be an intact mtRNA unit from a novel mtDNA gene designated WANCYHP that overlaps the WANCY tRNA gene cluster region. This transcript was highly induced by both anoxic and freezing stresses in turtle heart. The other transcripts are considered to be the processed intermediates of mtRNA transcripts with WANCYHP sequence. All these transcripts were differentially regulated by anoxia and freezing in different organs. The data suggest that mtRNA processing is sensitive to regulation by external stresses, oxygen deprivation, and freezing. Furthermore, the fact that the WANCYHP transcript is highly induced during anoxic exposure suggests that it may play an important role in the regulation of mitochondrial activities to coordinate the physiological adaptation to anoxia.
Bae, Daeryeong; Kim, Shino; Lee, Wonoh; Yi, Jin Woo; Um, Moon Kwang; Seong, Dong Gi
2018-05-21
A fast-cure carbon fiber/epoxy prepreg was thermoformed against a replicated automotive roof panel mold (square-cup) to investigate the effects of the stacking sequence of prepreg layers with unidirectional and plain-woven fabrics, and of mold geometry with different drawing angles and depths, on the fiber deformation and formability of the prepreg. The optimum forming condition was determined via analysis of the material properties of the epoxy resin. The non-linear mechanical properties of the prepreg in the deformation modes of inter- and intra-ply shear, tensile and bending were measured to be used as input data for the commercial virtual forming simulation software. The prepreg with a stacking sequence containing the plain-woven carbon prepreg on the outer layer of the laminate was successfully thermoformed against a mold with a depth of 20 mm and a tilting angle of 110°. Experimental results for the shear deformations at each corner of the thermoformed square-cup product were compared with the simulation, and a similarity in the overall tendency of the shear angle along the path at each corner was observed. The results are expected to contribute to the optimization of parameters on materials, mold design and processing in the thermoforming mass-production process for manufacturing high quality automotive parts with a square-cup geometry.
Rincon, Sergio A; Paoletti, Anne
2016-01-01
Unveiling the function of a novel protein is a challenging task that requires careful experimental design. Yeast cytokinesis is a conserved process that involves modular structural and regulatory proteins. For such proteins, an important step is to identify their domains and structural organization. Here we briefly discuss a collection of methods commonly used for sequence alignment and prediction of protein structure that represent powerful tools for the identification of homologous domains and the design of structure-function approaches to test experimentally the function of multi-domain proteins such as those implicated in yeast cytokinesis.
Integrated residential photovoltaic array development
NASA Astrophysics Data System (ADS)
Shepard, N. F., Jr.
1981-12-01
An advanced, universally-mountable, integrated residential photovoltaic array concept was defined based upon an in-depth formulation and evaluation of three candidate approaches which were synthesized from existing or proposed residential array concepts. The impact of module circuitry and process sequence is considered and technology gaps and performance drivers associated with residential photovoltaic array concepts are identified. The actual learning experience gained from the comparison of the problem areas of the hexagonal shingle design with the rectangular module design led to what is considered an advanced array concept. Building the laboratory mockup provided actual experience and the opportunity to uncover additional technology gaps.
JPL-ANTOPT antenna structure optimization program
NASA Technical Reports Server (NTRS)
Strain, D. M.
1994-01-01
New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for statics loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.
1981-02-01
or analysis modules, each performing one specific step in the design or analysis process. These modules will be callable in any logical sequence... tempt to place and cut off bars, but will show the required steel area and bond requirements per foot at suitable intervals across the base... bond strength) shall be as required in ACI 318-71 Chapter 12, except that computed shear V shall be multiplied by 2.0 and substituted for Vu.
Okura, Hiromichi; Takahashi, Tsuyoshi; Mihara, Hisakazu
2012-06-01
Successful approaches of de novo protein design suggest a great potential to create novel structural folds and to understand natural rules of protein folding. For these purposes, smaller and simpler de novo proteins have been developed. Here, we constructed smaller proteins by removing the terminal sequences from stable de novo vTAJ proteins and compared the stabilities of the mutant and original proteins. vTAJ proteins were screened from an α3β3 binary-patterned library designed with polar/nonpolar periodicities of α-helix and β-sheet. vTAJ proteins carry additional terminal sequences due to the method used to construct the genetically repeated library sequences. By removing parts of these sequences, we successfully obtained stable smaller de novo protein mutants with smaller amino acid alphabets than the originals. However, these mutants showed differences in ANS binding properties and in stability against denaturant and pH change. The terminal sequences, which were designed merely as flexible linkers and not as secondary structure units, substantially affected these physicochemical details. This study has implications for adjusting protein stability by designing N- and C-terminal sequences.
Halper, Sean M; Cetnar, Daniel P; Salis, Howard M
2018-01-01
Engineering many-enzyme metabolic pathways suffers from the design curse of dimensionality. There are an astronomical number of synonymous DNA sequence choices, though relatively few will express an evolutionarily robust, maximally productive pathway without metabolic bottlenecks. To solve this challenge, we have developed an integrated, automated computational-experimental pipeline that identifies a pathway's optimal DNA sequence without high-throughput screening or many cycles of design-build-test. The first step applies our Operon Calculator algorithm to design a host-specific, evolutionarily robust bacterial operon sequence with maximally tunable enzyme expression levels. The second step applies our RBS Library Calculator algorithm to systematically vary enzyme expression levels with the smallest-sized library. After characterizing a small number of constructed pathway variants, measurements are supplied to our Pathway Map Calculator algorithm, which then parameterizes a kinetic metabolic model that ultimately predicts the pathway's optimal enzyme expression levels and DNA sequences. Altogether, our algorithms provide the ability to efficiently map the pathway's sequence-expression-activity space and predict DNA sequences with desired metabolic fluxes. Here, we provide a step-by-step guide to applying the Pathway Optimization Pipeline to a desired multi-enzyme pathway in a bacterial host.
Effects of pre- and pro-sequence of thaumatin on the secretion by Pichia pastoris.
Ide, Nobuyuki; Masuda, Tetsuya; Kitabatake, Naofumi
2007-11-23
Thaumatin is a 22-kDa sweet-tasting protein containing eight disulfide bonds. When thaumatin is expressed in Pichia pastoris using the thaumatin cDNA fused with both the alpha-factor signal sequence and the Kex2 protease cleavage site from Saccharomyces cerevisiae, the N-terminal sequence of the secreted thaumatin molecule is not processed correctly. To examine the role of the thaumatin cDNA-encoded N-terminal pre-sequence and C-terminal pro-sequence on the processing of thaumatin and efficiency of thaumatin production in P. pastoris, four expression plasmids with different pre-sequence and pro-sequence were constructed and transformed into P. pastoris. The transformants containing pre-thaumatin gene that has the native plant signal, secreted thaumatin molecules in the medium. The N-terminal amino acid sequence of the secreted thaumatin molecule was processed correctly. The production yield of thaumatin was not affected by the C-terminal pro-sequence, and the pro-sequence was not processed in P. pastoris, indicating that pro-sequence is not necessary for thaumatin synthesis.
Postel, Alexander; Schmeiser, Stefanie; Zimmermann, Bernd; Becher, Paul
2016-01-01
Molecular epidemiology has become an indispensable tool in the diagnosis of diseases and in tracing the infection routes of pathogens. Due to advances in conventional sequencing and the development of high throughput technologies, the field of sequence determination is in the process of being revolutionized. Platforms for sharing sequence information and providing standardized tools for phylogenetic analyses are becoming increasingly important. The database (DB) of the European Union (EU) and World Organisation for Animal Health (OIE) Reference Laboratory for classical swine fever offers one of the world’s largest semi-public virus-specific sequence collections combined with a module for phylogenetic analysis. The classical swine fever (CSF) DB (CSF-DB) became a valuable tool for supporting diagnosis and epidemiological investigations of this highly contagious disease in pigs with high socio-economic impacts worldwide. The DB has been re-designed and now allows for the storage and analysis of traditionally used, well established genomic regions and of larger genomic regions including complete viral genomes. We present an application example for the analysis of highly similar viral sequences obtained in an endemic disease situation and introduce the new geographic “CSF Maps” tool. The concept of this standardized and easy-to-use DB with an integrated genetic typing module is suited to serve as a blueprint for similar platforms for other human or animal viruses. PMID:27827988
ScaffoldSeq: Software for characterization of directed evolution populations.
Woldring, Daniel R; Holec, Patrick V; Hackel, Benjamin J
2016-07-01
ScaffoldSeq is software designed for the numerous applications, including directed evolution analysis, in which a user generates a population of DNA sequences encoding for partially diverse proteins with related functions and would like to characterize the single site and pairwise amino acid frequencies across the population. A common scenario for enzyme maturation, antibody screening, and alternative scaffold engineering involves naïve and evolved populations that contain diversified regions, varying in both sequence and length, within a conserved framework. Analyzing the diversified regions of such populations is facilitated by high-throughput sequencing platforms; however, length variability within these regions (e.g., antibody CDRs) encumbers the alignment process. To overcome this challenge, the ScaffoldSeq algorithm takes advantage of conserved framework sequences to quickly identify diverse regions. Beyond this, unintended biases in sequence frequency are generated throughout the experimental workflow required to evolve and isolate clones of interest prior to DNA sequencing. ScaffoldSeq software uniquely handles this issue by providing tools to quantify and remove background sequences, cluster similar protein families, and dampen the impact of dominant clones. The software produces graphical and tabular summaries for each region of interest, allowing users to evaluate diversity in a site-specific manner as well as identify epistatic pairwise interactions. The code and detailed information are freely available at http://research.cems.umn.edu/hackel. Proteins 2016; 84:869-874. © 2016 Wiley Periodicals, Inc.
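The central idea, locating a diversified region between conserved framework flanks and tabulating site-wise amino acid frequencies, can be sketched as follows. This is not ScaffoldSeq itself: the flank sequences and reads are invented, and ScaffoldSeq's handling of length variation, clustering, and dominant-clone dampening is omitted.

```python
# A sketch (not ScaffoldSeq) of its core idea: use conserved framework flanks
# to extract the diversified region from each read, then tabulate site-wise
# amino acid frequencies. Flanks and reads below are hypothetical.
from collections import Counter

FLANK5, FLANK3 = "GSGS", "TTYT"   # hypothetical conserved framework flanks

def diversified_region(protein):
    start = protein.find(FLANK5)
    end = protein.find(FLANK3, start)
    if start == -1 or end == -1:
        return None                # framework not found; treat as background
    return protein[start + len(FLANK5):end]

def site_frequencies(regions):
    """Per-position amino acid counts for regions of identical length."""
    return [Counter(col) for col in zip(*regions)]

reads = ["AAGSGSWKYRTTYTDD", "AAGSGSWRYRTTYTDD", "AAGSGSFKYHTTYTDD"]
regions = [r for r in map(diversified_region, reads) if r]
for pos, counts in enumerate(site_frequencies(regions), start=1):
    print(pos, dict(counts))
```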
Genome analysis and identification of gelatinase encoded gene in Enterobacter aerogenes
NASA Astrophysics Data System (ADS)
Shahimi, Safiyyah; Mutalib, Sahilah Abdul; Khalid, Rozida Abdul; Repin, Rul Aisyah Mat; Lamri, Mohd Fadly; Bakar, Mohd Faizal Abu; Isa, Mohd Noor Mat
2016-11-01
In this study, bioinformatic analysis of the genome sequence of E. aerogenes was performed to identify genes encoding gelatinase. Enterobacter aerogenes was isolated from hot spring water and shows gelatinase activity that is species-specific towards porcine and fish gelatin. This bacterium therefore offers the possibility of producing enzymes specific to gelatin from each of these species. The E. aerogenes genome was partially sequenced, yielding a total sequence size of 5.0 mega base pairs (Mbp). The pre-processing pipeline yielded 87.6 Mbp of total reads, 68.8 Mbp of high-quality reads, and a high-quality percentage of 78.58%. Genome assembly produced 120 contigs, with 67.5% of contigs over 1 kilobase pair (kbp), an N50 contig length of 124,856 bp, and a GC base content of 55.17%. About 4,705 protein-coding genes were identified from the protein prediction analysis. Two candidate genes were selected with the highest sequence identity to gelatinase enzymes available in the Swiss-Prot and NCBI online databases. They were NODE_9_length_26866_cov_148.013245_12, containing a 1,029 base pair (bp) sequence encoding 342 amino acids, and NODE_24_length_155103_cov_177.082458_62, containing a 717 bp sequence encoding 238 amino acids. Thus, two primer pairs (forward and reverse) were designed based on the open reading frames (ORFs) of the selected genes. Genome analysis of E. aerogenes thus identified genes encoding gelatinase.
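Two of the assembly statistics quoted above, GC content and N50 contig length, can be computed with a short sketch (illustrative only, toy contigs):

```python
# A minimal sketch of two standard assembly statistics: GC content and N50
# (the contig length at which contigs of that size or longer cover at least
# half of the total assembly). Contigs below are toy data.
def gc_percent(contigs):
    seq = "".join(contigs).upper()
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

def n50(contigs):
    lengths = sorted((len(c) for c in contigs), reverse=True)
    half, running = sum(lengths) / 2.0, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

contigs = ["ATGCGC" * 1000, "ATAT" * 500, "GGCC" * 200]
print(round(gc_percent(contigs), 2), n50(contigs))
```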
A direct biocombinatorial strategy toward next generation, mussel-glue inspired saltwater adhesives.
Wilke, Patrick; Helfricht, Nicolas; Mark, Andreas; Papastavrou, Georg; Faivre, Damien; Börner, Hans G
2014-09-10
Biological materials exhibit remarkable, purpose-adapted properties that provide a source of inspiration for designing new materials to meet the requirements of future applications. For instance, marine mussels are able to attach to a broad spectrum of hard surfaces under hostile conditions. Controlling wet-adhesion of synthetic macromolecules by analogue processes promises to strongly impact materials sciences by offering advanced coatings, adhesives, and glues. The de novo design of macromolecules to mimic complex aspects of mussel adhesion still constitutes a challenge. Phage display allows material scientists to design specifically interacting molecules with tailored affinity to material surfaces. Here, we report on the integration of enzymatic processing steps into phage display biopanning to expand the biocombinatorial procedure and enable the direct selection of enzymatically activable peptide adhesion domains. Adsorption isotherms and single molecule force spectroscopy show that those de novo peptides mimic complex aspects of bioadhesion, such as enzymatic activation (by tyrosinase), the switchability from weak to strong binders, and adsorption under hostile saltwater conditions. Furthermore, peptide-poly(ethylene oxide) conjugates are synthesized to generate protective coatings, which possess anti-fouling properties and suppress irreversible interactions with blood-plasma protein cocktails. The extended phage display procedure provides a generic way to non-natural peptide adhesion domains, which not only mimic nature but also improve biological sequence sections extractable from mussel-glue proteins. The de novo peptides manage to combine several tasks in a minimal 12-mer sequence and thus pave the way to overcome major challenges of technical wet glues.
Scholz, Matthew; Lo, Chien -Chi; Chain, Patrick S. G.
2014-10-01
Assembly of metagenomic samples is a very complex process, with algorithms designed to address sequencing platform-specific issues (read length, data volume, and/or community complexity), while also faced with genomes that differ greatly in nucleotide compositional biases and in abundance. To address these issues, we have developed a post-assembly process: MetaGenomic Assembly by Merging (MeGAMerge). We compare this process to the performance of several assemblers, using both real and in-silico generated samples of different community composition and complexity. MeGAMerge consistently outperforms individual assembly methods, producing larger contigs with an increased number of predicted genes, without replication of data. MeGAMerge contigs are supported by read mapping and contig alignment data, when using synthetically-derived and real metagenomic data, as well as by gene prediction analyses and similarity searches. Ultimately, MeGAMerge is a flexible method that generates improved metagenome assemblies, with the ability to accommodate upcoming sequencing platforms, as well as present and future assembly algorithms.
Progress in manufacturing high efficiency AlGaAs/GaAs solar cells by MO-CVD
NASA Technical Reports Server (NTRS)
Yeh, Y. C. M.; Chang, K. I.; Tandon, J.
1984-01-01
Manufacturing technology for mass producing high efficiency GaAs solar cells is discussed. Progress in using a high-throughput MO-CVD reactor to produce high efficiency GaAs solar cells is described. Thickness and doping concentration uniformity of metal oxide chemical vapor deposition (MO-CVD) GaAs and AlGaAs layer growth are discussed. In addition, new tooling designs are given which increase the throughput of solar cell processing. To date, 2 cm x 2 cm AlGaAs/GaAs solar cells with efficiency up to 16.5% were produced. In order to meet throughput goals for mass producing GaAs solar cells, a large MO-CVD system (Cambridge Instrument Model MR-200) with a susceptor initially capable of processing 20 wafers (up to 75 mm diameter) during a single growth run was installed. In the MR-200, the sequencing of the gases and the heating power are controlled by a microprocessor-based programmable control console. Hence, operator errors can be reduced, leading to a more reproducible production sequence.
Design Of Computer Based Test Using The Unified Modeling Language
NASA Astrophysics Data System (ADS)
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the Joint Selection admission test for state Polytechnics (SB-UMPN) and the Independent selection (UM-Polbeng) was conducted using Paper-Based Tests (PBT). The Paper-Based Test model has some weaknesses: it wastes too much paper, questions can leak to the public, and test-result data can be manipulated. This research aimed to create a Computer-Based Test (CBT) model using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to pay attention to protecting the test questions before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application was a client-server model over a Local Area Network (LAN). The result of the design was the Computer-Based Test application for admission selection at Politeknik Negeri Bengkalis.
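The Fisher-Yates shuffle mentioned above for randomizing questions drawn from the bank can be sketched as follows (the RSA encryption of the questions is a separate step and is not shown):

```python
# A minimal sketch of the Fisher-Yates shuffle used to randomize the order of
# questions drawn from the question bank. The question bank here is invented.
import random

def fisher_yates_shuffle(items, rng=random):
    items = list(items)                    # shuffle a copy, keep the bank intact
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)              # pick an index in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

question_bank = [f"Question {n}" for n in range(1, 11)]
print(fisher_yates_shuffle(question_bank))
```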
NASA Technical Reports Server (NTRS)
Gladden, Roy
2007-01-01
Version 2.0 of the autogen software has been released. "Autogen" (automated sequence generation) signifies both a process and software used to implement the process of automated generation of sequences of commands in a standard format for uplink to spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes.
Automated design of genomic Southern blot probes
2010-01-01
Background Southern blotting is a DNA analysis technique that has found widespread application in molecular biology. It has been used for gene discovery and mapping and has diagnostic and forensic applications, including mutation detection in patient samples and DNA fingerprinting in criminal investigations. Southern blotting has been employed as the definitive method for detecting transgene integration, and successful homologous recombination in gene targeting experiments. The technique employs a labeled DNA probe to detect a specific DNA sequence in a complex DNA sample that has been separated by restriction-digest and gel electrophoresis. Critically, for the technique to succeed, the probe must be unique to the target locus so as not to cross-hybridize to other endogenous DNA within the sample. Investigators routinely employ a manual approach to probe design. A genome browser is used to extract DNA sequence from the locus of interest, which is searched against the target genome using a BLAST-like tool. Ideally a single perfect match is obtained to the target, with little cross-reactivity caused by homologous DNA sequence present in the genome and/or repetitive and low-complexity elements in the candidate probe. This is a labor-intensive process often requiring several attempts to find a suitable probe for laboratory testing. Results We have written an informatic pipeline to automatically design genomic Southern blot probes that specifically attempts to optimize the resultant probe, employing a brute-force strategy of generating many candidate probes of acceptable length in the user-specified design window, searching all against the target genome, then scoring and ranking the candidates by uniqueness and repetitive DNA element content. Using these in silico measures we can automatically design probes that we predict to perform as well, or better, than our previous manual designs, while considerably reducing design time. We went on to experimentally validate a number of these automated designs by Southern blotting. The majority of probes we tested performed well, confirming our in silico prediction methodology and the general usefulness of the software for automated genomic Southern probe design. Conclusions Software and supplementary information are freely available at: http://www.genes2cognition.org/software/southern_blot PMID:20113467
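The brute-force design loop described in the Results can be sketched as follows. This is illustrative only: the repeat score is a toy stand-in for repetitive/low-complexity screening, the genome-wide uniqueness search (BLAST-like) is omitted, and the window size, step, and example locus are invented.

```python
# A sketch of a brute-force probe-design loop: enumerate candidate windows,
# penalize repetitive content, and rank the candidates. A real pipeline would
# also score uniqueness via a genome-wide BLAST-like search (not shown).
import random

def repeat_fraction(seq, k=8):
    """Fraction of k-mers in `seq` that are repeats (non-unique)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return (1.0 - len(set(kmers)) / len(kmers)) if kmers else 0.0

def candidate_probes(region, probe_len=500, step=50):
    for start in range(0, len(region) - probe_len + 1, step):
        probe = region[start:start + probe_len]
        yield start, probe, repeat_fraction(probe)

random.seed(1)
unique_stretch = "".join(random.choice("ACGT") for _ in range(600))
design_window = "AT" * 300 + unique_stretch + "GATTACA" * 100   # toy locus

ranked = sorted(candidate_probes(design_window), key=lambda c: c[2])
start, probe, repeats = ranked[0]
print(f"best candidate starts at {start} with repeat fraction {repeats:.2f}")
```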
Prospects and limitations of full-text index structures in genome analysis
Vyverman, Michaël; De Baets, Bernard; Fack, Veerle; Dawyndt, Peter
2012-01-01
The combination of incessant advances in sequencing technology producing large amounts of data and innovative bioinformatics approaches, designed to cope with this data flood, has led to new interesting results in the life sciences. Given the magnitude of sequence data to be processed, many bioinformatics tools rely on efficient solutions to a variety of complex string problems. These solutions include fast heuristic algorithms and advanced data structures, generally referred to as index structures. Although the importance of index structures is generally known to the bioinformatics community, the design and potency of these data structures, as well as their properties and limitations, are less understood. Moreover, the last decade has seen a boom in the number of variant index structures featuring complex and diverse memory-time trade-offs. This article brings a comprehensive state-of-the-art overview of the most popular index structures and their recently developed variants. Their features, interrelationships, the trade-offs they impose, but also their practical limitations, are explained and compared. PMID:22584621
Expectation, information processing, and subjective duration.
Simchy-Gross, Rhimmon; Margulis, Elizabeth Hellmuth
2018-01-01
In research on psychological time, it is important to examine the subjective duration of entire stimulus sequences, such as those produced by music (Teki, Frontiers in Neuroscience, 10, 2016). Yet research on the temporal oddball illusion (according to which oddball stimuli seem longer than standard stimuli of the same duration) has examined only the subjective duration of single events contained within sequences, not the subjective duration of sequences themselves. Does the finding that oddballs seem longer than standards translate to entire sequences, such that entire sequences that contain oddballs seem longer than those that do not? Is this potential translation influenced by the mode of information processing-whether people are engaged in direct or indirect temporal processing? Two experiments aimed to answer both questions using different manipulations of information processing. In both experiments, musical sequences either did or did not contain oddballs (auditory sliding tones). To manipulate information processing, we varied the task (Experiment 1), the sequence event structure (Experiments 1 and 2), and the sequence familiarity (Experiment 2) independently within subjects. Overall, in both experiments, the sequences that contained oddballs seemed shorter than those that did not when people were engaged in direct temporal processing, but longer when people were engaged in indirect temporal processing. These findings support the dual-process contingency model of time estimation (Zakay, Attention, Perception & Psychophysics, 54, 656-664, 1993). Theoretical implications for attention-based and memory-based models of time estimation, the pacemaker accumulator and coding efficiency hypotheses of time perception, and dynamic attending theory are discussed.
Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov
2014-01-01
Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...