Science.gov

Sample records for bioinformatics system built

  1. ebTrack: an environmental bioinformatics system built upon ArrayTrack™

    PubMed Central

    Chen, Minjun; Martin, Jackson; Fang, Hong; Isukapalli, Sastry; Georgopoulos, Panos G; Welsh, William J; Tong, Weida

    2009-01-01

    ebTrack is being developed as an integrated bioinformatics system for environmental research and analysis by addressing the issues of integration, curation, management, first-level analysis and interpretation of environmental and toxicological data from diverse sources. It is based on enhancements to the US FDA-developed ArrayTrack™ system through additional analysis modules for gene expression data as well as through incorporation of, and linkages to, modules for analysis of proteomic and metabonomic datasets that include tandem mass spectra. ebTrack uses a client-server architecture with the free and open-source PostgreSQL as its database engine, and Java tools for the user interface, analysis, visualization, and web-based deployment. Several predictive tools that are critical for environmental health research are currently supported in ebTrack, including Significance Analysis of Microarrays (SAM). Furthermore, new tools are under continuous integration, and interfaces to environmental health risk analysis tools are being developed in order to make ebTrack widely usable. These health risk analysis tools include the Modeling ENvironment for TOtal Risk studies (MENTOR) for source-to-dose exposure modeling and the DOse Response Information ANalysis system (DORIAN) for health outcome modeling. The design of ebTrack is presented in detail, and the steps involved in its application are summarized through an illustrative application. PMID:19278561
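
    The SAM method named above scores each gene with a regularised t-like statistic. A minimal sketch in Python, assuming illustrative expression values and an arbitrary fudge constant s0 (real SAM estimates s0 from the data):

```python
from statistics import mean, stdev

def sam_d_statistic(group_a, group_b, s0=0.1):
    """SAM-style relative difference for one gene:
    d = (mean_a - mean_b) / (s + s0), where s is the pooled
    standard error and s0 is a small 'fudge' constant that
    stabilises low-variance genes. s0=0.1 is illustrative only."""
    na, nb = len(group_a), len(group_b)
    # pooled within-group variance
    pooled = ((na - 1) * stdev(group_a) ** 2 +
              (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    s = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (mean(group_a) - mean(group_b)) / (s + s0)

# A gene clearly up-regulated in group A scores high:
print(sam_d_statistic([5.1, 5.3, 5.2], [1.0, 1.1, 0.9]))
```

    In the full procedure, these scores are compared against scores from permuted group labels to estimate a false discovery rate.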

  2. Systems Biology, Bioinformatics, and Biomarkers in Neuropsychiatry

    PubMed Central

    Alawieh, Ali; Zaraket, Fadi A.; Li, Jian-Liang; Mondello, Stefania; Nokkari, Amaly; Razafsha, Mahdi; Fadlallah, Bilal; Boustany, Rose-Mary; Kobeissy, Firas H.

    2012-01-01

    Although neuropsychiatric (NP) disorders are among the top causes of disability worldwide, with enormous financial costs, they can still be viewed as among the most complex disorders, of unknown etiology and poorly understood pathophysiology. The complexity of NP disorders arises from their etiologic heterogeneity and the concurrent influence of environmental and genetic factors. In addition, the absence of rigid boundaries between the normal and diseased states, the remarkable overlap of symptoms among conditions, the high inter-individual and inter-population variation, and the absence of discriminative molecular and/or imaging biomarkers for these diseases make accurate diagnosis difficult. Along with the complexity of NP disorders, the practice of psychiatry suffers from a “top-down” method that relies on symptom checklists. Although checklist diagnoses cost less in terms of time and money, they are less accurate than a comprehensive assessment. Thus, reliable and objective diagnostic tools such as biomarkers are needed that can detect and discriminate among NP disorders. The real promise in understanding the pathophysiology of NP disorders lies in returning psychiatry to its biological basis through a systems approach, which, given the complexity of NP disorders, is needed to understand their normal functioning and response to perturbation. This approach is implemented in the discipline of systems biology, which enables the discovery of disease-specific NP biomarkers for diagnosis and therapeutics. Systems biology involves the use of sophisticated “omics”-based software discovery tools and high-performance computational techniques in order to understand the behavior of biological systems and to identify diagnostic and prognostic biomarkers specific to NP disorders, together with new therapeutic targets. In this review, we try to shed light on the need for systems biology, bioinformatics, and biomarkers in neuropsychiatry, and

  3. SNPTrack™ : an integrated bioinformatics system for genetic association studies.

    PubMed

    Xu, Joshua; Kelly, Reagan; Zhou, Guangxu; Turner, Steven A; Ding, Don; Harris, Stephen C; Hong, Huixiao; Fang, Hong; Tong, Weida

    2012-01-01

    A genetic association study is a complicated process that involves collecting phenotypic data, generating genotypic data, analyzing associations between genotypic and phenotypic data, and interpreting genetic biomarkers identified. SNPTrack is an integrated bioinformatics system developed by the US Food and Drug Administration (FDA) to support the review and analysis of pharmacogenetics data resulting from FDA research or submitted by sponsors. The system integrates data management, analysis, and interpretation in a single platform for genetic association studies. Specifically, it stores genotyping data and single-nucleotide polymorphism (SNP) annotations along with study design data in an Oracle database. It also integrates popular genetic analysis tools, such as PLINK and Haploview. SNPTrack provides genetic analysis capabilities and captures analysis results in its database as SNP lists that can be cross-linked for biological interpretation to gene/protein annotations, Gene Ontology, and pathway analysis data. With SNPTrack, users can do the entire stream of bioinformatics jobs for genetic association studies. SNPTrack is freely available to the public at http://www.fda.gov/ScienceResearch/BioinformaticsTools/SNPTrack/default.htm. PMID:23245293
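
    The basic allelic association test run by tools such as PLINK compares allele counts between cases and controls with a 1-degree-of-freedom Pearson chi-square. A minimal sketch (the counts are illustrative):

```python
def allelic_chi2(case_a, case_b, ctrl_a, ctrl_b):
    """Pearson chi-square (1 df) on a 2x2 table of allele counts:
    rows = case/control, columns = allele A/B. This is the basic
    per-SNP allelic test performed in genetic association studies."""
    table = [[case_a, case_b], [ctrl_a, ctrl_b]]
    total = case_a + case_b + ctrl_a + ctrl_b
    row = [sum(r) for r in table]
    col = [case_a + ctrl_a, case_b + ctrl_b]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# 100 cases vs 100 controls (200 alleles each); allele A enriched in cases:
print(allelic_chi2(120, 80, 90, 110))
```

    A system like SNPTrack would run this per SNP, then cross-link the significant SNP list to gene and pathway annotations.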

  4. Built-In Diagnostics (BID) Of Equipment/Systems

    NASA Technical Reports Server (NTRS)

    Granieri, Michael N.; Giordano, John P.; Nolan, Mary E.

    1995-01-01

    Diagnostician(TM)-on-Chip (DOC) technology identifies faults and commands system reconfiguration. Smart microcontrollers operating in conjunction with other system-control circuits command self-correcting system/equipment actions in real time. The DOC microcontroller generates commands for associated built-in test equipment to stimulate the unit of equipment being diagnosed, collects and processes response data obtained by the built-in test equipment, and performs diagnostic reasoning on the response data, using a diagnostic knowledge base derived from design data.
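
    The diagnostic-reasoning step can be pictured as matching collected built-in-test responses against a fault-signature knowledge base. A toy sketch; the signal names and fault table below are invented for illustration, not taken from the actual DOC knowledge base:

```python
# Hypothetical fault-signature table mapping observed built-in-test (BIT)
# response patterns to the suspected faulty unit.
FAULT_SIGNATURES = {
    ("V_OUT_LOW", "TEMP_OK"): "power_regulator",
    ("V_OUT_OK", "TEMP_HIGH"): "cooling_fan",
    ("V_OUT_LOW", "TEMP_HIGH"): "power_supply",
}

def diagnose(responses):
    """Match BIT responses against the knowledge base and return
    the suspected faulty unit, or None if no rule fires."""
    return FAULT_SIGNATURES.get(tuple(responses))

print(diagnose(["V_OUT_LOW", "TEMP_HIGH"]))
```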

  5. Stroke of GENEous: A Tool for Teaching Bioinformatics to Information Systems Majors

    ERIC Educational Resources Information Center

    Tikekar, Rahul

    2006-01-01

    A tool for teaching bioinformatics concepts to information systems majors is described. Biological data are available from numerous sources and a good knowledge of biology is needed to understand much of these data. As the subject of bioinformatics gains popularity among computer and information science course offerings, it will become essential…

  6. Using Attributes of Natural Systems to Plan the Built Environment

    EPA Science Inventory

    The concept of 'protection' is possible only before something is lost, however, development of the built environment to meet human needs also compromises the environmental systems that sustain human life. Because maintaining an environment that is able to sustain human life requi...

  7. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    PubMed

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475
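
    Among the conversions mentioned, SBML is plain XML, so a minimal species listing can be sketched with the standard library alone (a production pipeline would use a dedicated library such as libSBML; the document below is a hand-written toy model):

```python
import xml.etree.ElementTree as ET

# Minimal hand-written SBML Level 3 fragment, for illustration only.
SBML = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level3/version1/core"
      level="3" version="1">
  <model id="toy">
    <listOfSpecies>
      <species id="glucose" compartment="cell"/>
      <species id="atp" compartment="cell"/>
    </listOfSpecies>
  </model>
</sbml>"""

NS = {"sbml": "http://www.sbml.org/sbml/level3/version1/core"}

def species_ids(sbml_text):
    """Return the ids of all species in an SBML document string."""
    root = ET.fromstring(sbml_text)
    return [s.get("id") for s in root.findall(".//sbml:species", NS)]

print(species_ids(SBML))
```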

  9. Transformers: Shape-Changing Space Systems Built with Robotic Textiles

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2013-01-01

    Prior approaches to transformer-like robots had only very limited success. They suffered from a lack of reliability, limited ability to integrate large surfaces, and very modest changes in overall shape. Robots can now be built from two-dimensional (2D) layers of robotic fabric. These transformers, a new kind of robotic space system, are dramatically different from current systems in at least two ways. First, the entire transformer is built from a single, thin sheet: a flexible layer of a robotic fabric (ro-fabric), or robotic textile (ro-textile). Second, the ro-textile layer is foldable to a small volume and self-unfolding to adapt shape and function to mission phases.

  10. Bioinformatics for transporter pharmacogenomics and systems biology: data integration and modeling with UML.

    PubMed

    Yan, Qing

    2010-01-01

    Bioinformatics is the rational study at an abstract level that can influence the way we understand biomedical facts and the way we apply the biomedical knowledge. Bioinformatics is facing challenges in helping with finding the relationships between genetic structures and functions, analyzing genotype-phenotype associations, and understanding gene-environment interactions at the systems level. One of the most important issues in bioinformatics is data integration. The data integration methods introduced here can be used to organize and integrate both public and in-house data. With the volume of data and the high complexity, computational decision support is essential for integrative transporter studies in pharmacogenomics, nutrigenomics, epigenetics, and systems biology. For the development of such a decision support system, object-oriented (OO) models can be constructed using the Unified Modeling Language (UML). A methodology is developed to build biomedical models at different system levels and construct corresponding UML diagrams, including use case diagrams, class diagrams, and sequence diagrams. By OO modeling using UML, the problems of transporter pharmacogenomics and systems biology can be approached from different angles with a more complete view, which may greatly enhance the efforts in effective drug discovery and development. Bioinformatics resources of membrane transporters and general bioinformatics databases and tools that are frequently used in transporter studies are also collected here. An informatics decision support system based on the models presented here is available at http://www.pharmtao.com/transporter . The methodology developed here can also be used for other biomedical fields. PMID:20419428
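
    The UML-to-code mapping described here is direct: each UML class becomes a class with attributes, and associations become references between classes. A minimal sketch; the class names, attributes, and example variant are illustrative, not the article's actual models:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Variant:
    """One polymorphism of a transporter gene (illustrative)."""
    rsid: str
    effect: str  # e.g. "reduced_function"

@dataclass
class Transporter:
    """A membrane transporter with its substrates and variants,
    mirroring a UML class diagram's classes and associations."""
    symbol: str
    substrates: List[str] = field(default_factory=list)
    variants: List[Variant] = field(default_factory=list)

    def variants_with_effect(self, effect):
        return [v.rsid for v in self.variants if v.effect == effect]

abcb1 = Transporter("ABCB1", substrates=["digoxin"])
abcb1.variants.append(Variant("rs1045642", "reduced_function"))
print(abcb1.variants_with_effect("reduced_function"))
```

    A UML sequence diagram would then describe, for example, how a decision-support query walks from a drug to its transporters and then to clinically relevant variants.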

  11. Systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

    PubMed

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-01-01

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering. PMID:24709875
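
    Point (ii), system dynamics, is typically studied by simulating a network's differential equations and checking convergence to a steady state. A toy sketch of a negatively autoregulated gene, with arbitrary parameters:

```python
def simulate(alpha=2.0, beta=1.0, x0=0.0, dt=0.01, steps=5000):
    """Euler integration of a negatively autoregulated gene:
    dx/dt = alpha/(1+x) - beta*x. A toy illustration of the
    'system dynamics' analyses described above; the parameters
    are arbitrary, not from any real network."""
    x = x0
    for _ in range(steps):
        x += dt * (alpha / (1 + x) - beta * x)
    return x

# The steady state satisfies alpha/(1+x) = beta*x,
# i.e. x = 1 for alpha=2, beta=1; the trajectory converges there.
print(simulate())
```

    Robustness, in this picture, is the observation that moderate changes to alpha or x0 still drive the system back to a nearby steady state.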

  13. NETTAB 2014: From high-throughput structural bioinformatics to integrative systems biology.

    PubMed

    Romano, Paolo; Cordero, Francesca

    2016-01-01

    The fourteenth NETTAB workshop, NETTAB 2014, was devoted to a range of disciplines spanning structural bioinformatics, proteomics, and integrative systems biology. The topics of the workshop centred on bioinformatics methods, tools, applications, and perspectives for models, standards and management of high-throughput biological data, structural bioinformatics, functional proteomics, mass spectrometry, drug discovery, and systems biology. Forty-three scientific contributions were presented at NETTAB 2014, including keynote, special guest and tutorial talks, oral communications, and posters. Full papers from some of the best contributions presented at the workshop were later submitted to a special call for this Supplement. Here, we provide an overview of the workshop and introduce the manuscripts that have been accepted for publication in this Supplement. PMID:26960985

  14. Digital camera system built on JPEG2000 compression and decompression

    NASA Astrophysics Data System (ADS)

    Atsumi, Eiji

    2003-05-01

    A processing architecture for digital cameras has been built on the JPEG2000 compression system. The concerns are to minimize processing power and data traffic inside (data bandwidth at the interface) and outside (compression efficiency) the camera system. The key idea is to decompose the Bayer matrix data given by the image sensor into four half-resolution planes instead of interpolating to three full-resolution planes. With JPEG2000, a new compression standard capable of handling multi-component images, the four-plane representation can be encoded into a single bit-stream. This representation reduces data traffic between the image reconstruction stage and the compression stage by 1/3 to 1/2 compared with Bayer-interpolated data. Not only reduced processing power prior to and during compression but also competitive or superior compression efficiency is achieved. On reconstruction to full resolution, Bayer interpolation and/or edge enhancement is required as post-processing after a standard decoder, while half- or smaller-resolution images are reconstructed without post-processing. For mobile terminals with an integrated camera (image reconstruction in the camera hardware and compression in the terminal processor), this scheme helps to accommodate increased resolution within the limited data bandwidth from camera to terminal processor and the limited processing capability.
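
    The decomposition described, splitting an RGGB Bayer mosaic into four half-resolution planes without interpolation, can be sketched as:

```python
def split_bayer(raw):
    """Split an RGGB Bayer mosaic (list of rows) into four
    half-resolution planes: R, G1 (green on red rows), G2 (green
    on blue rows), and B. No interpolation is performed, so the
    four planes carry exactly the sensor's samples, at roughly
    half the traffic of three interpolated full-resolution planes."""
    r  = [row[0::2] for row in raw[0::2]]  # even rows, even cols
    g1 = [row[1::2] for row in raw[0::2]]  # even rows, odd cols
    g2 = [row[0::2] for row in raw[1::2]]  # odd rows, even cols
    b  = [row[1::2] for row in raw[1::2]]  # odd rows, odd cols
    return r, g1, g2, b

# 4x4 mosaic with values encoding position as 10*row + col:
raw = [[10 * y + x for x in range(4)] for y in range(4)]
r, g1, g2, b = split_bayer(raw)
print(r)
```

    Each plane is then handed to the multi-component JPEG2000 encoder; a half-resolution preview decodes directly from the planes, with demosaicing deferred to full-resolution reconstruction.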

  15. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    PubMed Central

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of researches that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  16. Airborne fibre and asbestos concentrations in system built schools

    NASA Astrophysics Data System (ADS)

    Burdett, Garry; Cottrell, Steve; Taylor, Catherine

    2009-02-01

    This paper summarises the airborne fibre concentration data measured in system-built schools that contained asbestos insulation board (AIB) enclosed in the support columns by a protective steel casing. The particular focus of this work was the CLASP (Consortium of Local Authorities Special Programme) system buildings. A variety of air monitoring tests were carried out to assess the potential for fibres to be released into the classroom. A peak release testing protocol was adopted that involved static sampling while simulating direct-impact disturbances to selected columns. This was carried out before remediation, after sealing gaps and holes in and around the casing visible in the room (i.e. below ceiling level), and additionally around the tops of the columns, which extended into the suspended ceiling void. Simulated and actual measurements of worker exposures were also undertaken while sealing columns and while carrying out cleaning and maintenance work in the ceiling voids. Routine analysis of these air samples was carried out by phase contrast microscopy (PCM), with a limited amount of analytical transmission electron microscopy (TEM) analysis to confirm whether the fibres visible by PCM were asbestos or non-asbestos. The PCM fibre concentration data from the peak release tests showed that while direct releases of fibres to the room air can occur from gaps and holes in and around the column casings, sealing is an effective way of minimising releases to below the limit of quantification (0.01 f/ml) of the PCM method for some 95% of the tests carried out. Sealing with silicone filler and taping any gaps and seams visible on the column casing in the room also gave concentrations below the limit of quantification (LOQ) of the PCM method for 95% of the tests carried out. The data available did not show any significant difference between the PCM fibre concentrations in the room air for columns that had or had not been sealed in the ceiling void, as well as in the room

  17. Advances in Omics and Bioinformatics Tools for Systems Analyses of Plant Functions

    PubMed Central

    Mochida, Keiichi; Shinozaki, Kazuo

    2011-01-01

    Omics and bioinformatics are essential to understanding the molecular systems that underlie various plant functions. Recent game-changing sequencing technologies have revitalized sequencing approaches in genomics and have produced opportunities for various emerging analytical applications. Driven by technological advances, several new omics layers such as the interactome, epigenome and hormonome have emerged. Furthermore, in several plant species, the development of omics resources has progressed to address particular biological properties of individual species. Integration of knowledge from omics-based research is an emerging issue as researchers seek to identify significance, gain biological insights and promote translational research. From these perspectives, we provide this review of the emerging aspects of plant systems research based on omics and bioinformatics analyses together with their associated resources and technological advances. PMID:22156726

  18. A practical, bioinformatic workflow system for large data sets generated by next-generation sequencing

    PubMed Central

    Cantacessi, Cinzia; Jex, Aaron R.; Hall, Ross S.; Young, Neil D.; Campbell, Bronwyn E.; Joachim, Anja; Nolan, Matthew J.; Abubucker, Sahar; Sternberg, Paul W.; Ranganathan, Shoba; Mitreva, Makedonka; Gasser, Robin B.

    2010-01-01

    Transcriptomics (at the level of single cells, tissues and/or whole organisms) underpins many fields of biomedical science, from understanding the basic cellular function in model organisms, to the elucidation of the biological events that govern the development and progression of human diseases, and the exploration of the mechanisms of survival, drug-resistance and virulence of pathogens. Next-generation sequencing (NGS) technologies are contributing to a massive expansion of transcriptomics in all fields and are reducing the cost, time and performance barriers presented by conventional approaches. However, bioinformatic tools for the analysis of the sequence data sets produced by these technologies can be daunting to researchers with limited or no expertise in bioinformatics. Here, we constructed a semi-automated, bioinformatic workflow system, and critically evaluated it for the analysis and annotation of large-scale sequence data sets generated by NGS. We demonstrated its utility for the exploration of differences in the transcriptomes among various stages and both sexes of an economically important parasitic worm (Oesophagostomum dentatum) as well as the prediction and prioritization of essential molecules (including GTPases, protein kinases and phosphatases) as novel drug target candidates. This workflow system provides a practical tool for the assembly, annotation and analysis of NGS data sets, also to researchers with a limited bioinformatic expertise. The custom-written Perl, Python and Unix shell computer scripts used can be readily modified or adapted to suit many different applications. This system is now utilized routinely for the analysis of data sets from pathogens of major socio-economic importance and can, in principle, be applied to transcriptomics data sets from any organism. PMID:20682560
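
    The workflow's chained-script structure can be sketched in miniature: parse FASTQ records, quality-filter them, and count the survivors. The reads and threshold below are illustrative, not from the actual system:

```python
def parse_fastq(lines):
    """Yield (header, sequence, quality) from FASTQ-formatted lines
    (four lines per record)."""
    it = iter(lines)
    for header in it:
        seq, _, qual = next(it), next(it), next(it)
        yield header.strip(), seq.strip(), qual.strip()

def mean_quality(qual):
    # Phred+33 encoding: score = ord(char) - 33
    return sum(ord(c) - 33 for c in qual) / len(qual)

def workflow(lines, min_q=20):
    """Chain the steps: parse, filter by mean quality, count kept reads."""
    kept = [s for h, s, q in parse_fastq(lines) if mean_quality(q) >= min_q]
    return len(kept)

reads = [
    "@read1", "ACGT", "+", "IIII",   # 'I' = Phred 40, high quality
    "@read2", "ACGT", "+", "!!!!",   # '!' = Phred 0, very low quality
]
print(workflow(reads))
```

    The real system chains assembly, annotation, and prioritization stages in the same spirit, each stage consuming the previous stage's output files.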

  19. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology.

    PubMed

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-11-01

    Visceral leishmaniasis, or kala-azar, is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction must also be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and at the field level. Computational approaches such as remote sensing, geographical information systems (GIS) and bioinformatics are the key resources for the detection and distribution of vectors, patterns, and ecological and environmental factors, and for genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been used to determine the cause of visceral leishmaniasis and to design strategies for preventing the disease from spreading from one region to another. PMID:23554714
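
    One standard remote-sensing quantity used in such vector-habitat mapping is the Normalised Difference Vegetation Index (NDVI), computed per pixel from near-infrared and red reflectance; the reflectance values below are illustrative:

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
    Vegetation reflects strongly in near-infrared, so dense
    vegetation (a candidate sand fly habitat indicator) yields
    values well above zero."""
    return (nir - red) / (nir + red)

# Dense vegetation vs bare soil under toy reflectances:
print(ndvi(0.5, 0.1), ndvi(0.2, 0.18))
```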

  1. Quantitative Analysis of the Trends Exhibited by the Three Interdisciplinary Biological Sciences: Biophysics, Bioinformatics, and Systems Biology.

    PubMed

    Kang, Jonghoon; Park, Seyeon; Venkat, Aarya; Gopinath, Adarsh

    2015-12-01

    New interdisciplinary biological sciences like bioinformatics, biophysics, and systems biology have become increasingly relevant in modern science. Many papers have suggested the importance of adding these subjects, particularly bioinformatics, to an undergraduate curriculum; however, most of their assertions have relied on qualitative arguments. In this paper, we will show our metadata analysis of a scientific literature database (PubMed) that quantitatively describes the importance of the subjects of bioinformatics, systems biology, and biophysics as compared with a well-established interdisciplinary subject, biochemistry. Specifically, we found that the development of each subject assessed by its publication volume was well described by a set of simple nonlinear equations, allowing us to characterize them quantitatively. Bioinformatics, which had the highest ratio of publications produced, was predicted to grow between 77% and 93% by 2025 according to the model. Due to the large number of publications produced in bioinformatics, which nearly matches the number published in biochemistry, it can be inferred that bioinformatics is almost equal in significance to biochemistry. Based on our analysis, we suggest that bioinformatics be added to the standard biology undergraduate curriculum. Adding this course to an undergraduate curriculum will better prepare students for future research in biology. PMID:26753026
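
    A logistic curve is one of the "simple nonlinear equations" commonly fitted to cumulative publication counts. A sketch with invented parameters (not the paper's fitted values), projecting relative growth over a decade:

```python
import math

def logistic(t, K, r, t0):
    """Logistic growth curve K / (1 + exp(-r*(t - t0))): saturating
    growth toward carrying capacity K, with rate r and midpoint t0.
    The parameters below are illustrative only."""
    return K / (1 + math.exp(-r * (t - t0)))

# Projected relative growth in publications from 2015 to 2025
# under toy parameters:
p2015 = logistic(2015, K=10000, r=0.15, t0=2015)
p2025 = logistic(2025, K=10000, r=0.15, t0=2015)
print(round((p2025 - p2015) / p2015 * 100, 1))
```

    Fitting K, r, and t0 to each field's annual PubMed counts is what allows the kind of 2025 projection the abstract reports.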

  3. Edge Bioinformatics

    Energy Science and Technology Software Center (ESTSC)

    2015-08-03

    Edge Bioinformatics is a developmental bioinformatics and data management platform that seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads. 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with a. a novel emerging disease outbreak or b. environmental surveillance.

  4. Edge Bioinformatics

    SciTech Connect

    Lo, Chien-Chi

    2015-08-03

    Edge Bioinformatics is a developmental bioinformatics and data management platform that seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads. 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with a. a novel emerging disease outbreak or b. environmental surveillance.

  5. Early Warning System: a juridical notion to be built

    NASA Astrophysics Data System (ADS)

    Lucarelli, A.

    2007-12-01

    Early warning systems (EWS) are becoming effective tools for real-time mitigation of the harmful effects arising from widely different hazards, ranging from famine to financial crises, malicious attacks, industrial accidents, and natural catastrophes. Early warning of natural catastrophic events allows the implementation of both alert systems and real-time prevention actions for the safety of the people and goods exposed to the risk. However, the effective implementation of early warning methods is hindered by the lack of a specific juridical frame. From a juridical point of view, EWS and, in general, all prevention activities need careful regulation, mainly with regard to responsibility and possible compensation for damage caused by the implemented actions. A preventive alarm has an active influence on infrastructures in control of public services, which in turn will suffer suspensions or interruptions because of the early warning actions. Hence the need for accurate normative references covering the typology of structures or infrastructures upon which the readiness activity acts; the progressive order of suspension of public services; the duration of these suspensions; the corporate bodies or administrations competent to take such decisions; the actors responsible for the consequences of false, missed, or delayed alarms; the mechanisms of compensation for damage; the insurance systems; etc. In the European Union, EWS are often cited as preventive methods of risk mitigation. Nevertheless, a juridical notion of EWS of general use is not available: EW is a concept that finds application in many different circles, each of which requires specific adaptations, and it may concern subjects for which the European Union does not have exclusive competence, as it may be the responsibility of the member states to provide the necessary regulations. As for the juridical arrangement of the EWS, this must be

  6. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration

    PubMed Central

    Lobo, Daniel; Feldman, Erica B.; Shah, Michelle; Malone, Taylor J.

    2014-01-01

    Abstract Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental effort, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large‐scale morphogenesis that match published data in the limb regeneration field. Major barriers to preventing an algorithmic approach are the lack of formal descriptions for experimental regenerative information and a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user‐friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration. PMID:25729585
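    The kind of formal encoding this ontology motivates — representing a regeneration experiment as unambiguous structured data rather than free text — can be sketched as below. The fields and values are invented for illustration and are not the paper's actual formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Manipulation:
    kind: str        # e.g. "amputation", "graft" (illustrative vocabulary)
    position: str    # anatomical location, e.g. "mid-zeugopod"

@dataclass
class Experiment:
    organism: str
    manipulations: list = field(default_factory=list)
    outcome: str = "unknown"   # resulting phenotype, encoded as a controlled term

# Encode one hypothetical published experiment as a machine-minable record.
exp = Experiment(organism="axolotl")
exp.manipulations.append(Manipulation(kind="amputation", position="mid-zeugopod"))
exp.outcome = "complete regeneration"
```

Records in this shape can be stored centrally and queried by both human researchers and model-discovery algorithms, which is the role the paper's database plays.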

  7. Towards a career in bioinformatics

    PubMed Central

    2009-01-01

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, founded in 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 9-11, 2009 at Biopolis, Singapore. InCoB has actively engaged researchers from the life sciences, systems biology and the clinic, to facilitate greater synergy between these groups. To encourage bioinformatics students and new researchers, tutorials and a student symposium, the Singapore Symposium on Computational Biology (SYMBIO), were organized, along with the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and the Clinical Bioinformatics (CBAS) Symposium. To many students and young researchers, however, pursuing a career in a multi-disciplinary area such as bioinformatics poses a Himalayan challenge. A collection of tips is presented here to provide signposts on the road to a career in bioinformatics. An overview of the application of bioinformatics to traditional and emerging areas, published in this supplement, is also presented to suggest possible future avenues of bioinformatics investigation. A case study on the application of e-learning tools in an undergraduate bioinformatics curriculum provides information on how to impart targeted education and sustain bioinformatics in the Asia-Pacific region. The next InCoB is scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. PMID:19958508

  8. An approach to built-in test for shipboard machinery systems

    NASA Astrophysics Data System (ADS)

    Hegner, H. R.

    This paper presents an approach for incorporating built-in test (BIT) into shipboard machinery systems. BIT, as used herein, denotes both built-in test and on-line monitoring. Since sensors are a key element to a successful machinery monitoring system, an assessment of shipboard sensors is included in the paper. Specific design examples are also presented for a marine diesel engine, gas turbine engine, and air conditioning plant.

  9. Bioinformatics and systems biology: bridging the gap between heterogeneous student backgrounds.

    PubMed

    Abeln, Sanne; Molenaar, Douwe; Feenstra, K Anton; Hoefsloot, Huub C J; Teusink, Bas; Heringa, Jaap

    2013-09-01

    Teaching students with very diverse backgrounds can be extremely challenging. This article uses the Bioinformatics and Systems Biology MSc in Amsterdam as a case study to describe how the knowledge gap for students with heterogeneous backgrounds can be bridged. We show that a mix in backgrounds can be turned into an advantage by creating a stimulating learning environment for the students. In the MSc Programme, conversion classes help to bridge differences between students, by mending initial knowledge and skill gaps. Mixing students from different backgrounds in a group to solve a complex task creates an opportunity for the students to reflect on their own abilities. We explain how a truly interdisciplinary approach to teaching helps students of all backgrounds to achieve the MSc end terms. Moreover, transferable skills obtained by the students in such a mixed study environment are invaluable for their later careers. PMID:23603092

  10. Analyses of Brucella Pathogenesis, Host Immunity, and Vaccine Targets using Systems Biology and Bioinformatics

    PubMed Central

    He, Yongqun

    2011-01-01

    Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using cutting-edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild type Brucella strains and mutants under different laboratory conditions. High throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have been focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high throughput serum antibody screening technology. The Vaxign reverse vaccinology tool has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of the community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning. PMID:22919594

  11. Specifying, Installing and Maintaining Built-Up and Modified Bitumen Roofing Systems.

    ERIC Educational Resources Information Center

    Hobson, Joseph W.

    2000-01-01

    Examines built-up, modified bitumen, and hybrid combinations of the two roofing systems and offers advice on how to assure high-quality performance and durability when using them. Included are a glossary of commercial roofing terms and asphalt roofing resources to aid decisions on roofing system and product selection. (GR)

  12. KDE Bioscience: platform for bioinformatics analysis workflows.

    PubMed

    Lu, Qiang; Hao, Pei; Curcin, Vasa; He, Weizhong; Li, Yuan-Yuan; Luo, Qing-Ming; Guo, Yi-Ke; Li, Yi-Xue

    2006-08-01

    Bioinformatics is a dynamic research area in which a large number of algorithms and programs have been developed rapidly and independently without much consideration so far of the need for standardization. The lack of such common standards, combined with unfriendly interfaces, makes it difficult for biologists to learn how to use these tools and to translate the data formats from one to another. Consequently, the construction of an integrative bioinformatics platform to facilitate biologists' research is an urgent and challenging task. KDE Bioscience is a Java-based software platform that collects a variety of bioinformatics tools and provides a workflow mechanism to integrate them. Nucleotide and protein sequences from local flat files, web sites, and relational databases can be entered, annotated, and aligned. Several home-made or third-party viewers are built in to provide visualization of annotations or alignments. KDE Bioscience can also be deployed in client-server mode, where simultaneous execution of the same workflow is supported for multiple users. Moreover, workflows can be published as web pages that can be executed from a web browser. The power of KDE Bioscience comes from the integrated algorithms and data sources. With its generic workflow mechanism, other novel calculations and simulations can be integrated to augment the current sequence analysis functions. Because of this flexible and extensible architecture, KDE Bioscience makes an ideal integrated informatics environment for future bioinformatics or systems biology research. PMID:16260186
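    The workflow mechanism described — wrapping heterogeneous tools as steps over a common record format so they can be chained without manual format translation — can be sketched as below. The class and the two toy steps are illustrative only, not KDE Bioscience's actual API.

```python
class Workflow:
    """Chain tool wrappers that each take and return a common record dict."""

    def __init__(self):
        self.steps = []

    def add(self, name, func):
        self.steps.append((name, func))
        return self  # allow fluent chaining

    def run(self, record):
        for name, func in self.steps:
            record = func(record)
            record.setdefault("history", []).append(name)  # provenance trail
        return record

def fetch_sequence(record):
    # Stand-in for retrieval from a flat file, web site, or relational database.
    record["sequence"] = "ATGGCGTAA"
    return record

def translate(record):
    # Toy translation step; a real wrapper would call an external tool.
    codon_table = {"ATG": "M", "GCG": "A", "TAA": "*"}
    seq = record["sequence"]
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    record["protein"] = "".join(codon_table[c] for c in codons)
    return record

result = (Workflow()
          .add("fetch", fetch_sequence)
          .add("translate", translate)
          .run({"id": "demo"}))
```

Because every step speaks the same record format, new tools can be slotted in without per-pair format converters, which is the integration point the abstract emphasizes.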

  13. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability
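    Because Pwrake extends Rake, workflows are declared as tasks with file prerequisites, and the tool derives the execution order from the dependency graph. A minimal sketch of that resolution idea (in Python rather than a Ruby rakefile; the task names and actions are hypothetical):

```python
# Toy Rake-style task graph: each target lists its prerequisites.
tasks = {
    "aligned.bam":  {"deps": ["reads.fastq", "ref.fa"], "action": "align"},
    "variants.vcf": {"deps": ["aligned.bam"],           "action": "call_variants"},
    "reads.fastq":  {"deps": [],                        "action": "stage_input"},
    "ref.fa":       {"deps": [],                        "action": "stage_input"},
}

def schedule(target, tasks, done=None, order=None):
    """Depth-first resolution: prerequisites run before the task that needs them."""
    done = set() if done is None else done
    order = [] if order is None else order
    if target in done:
        return order
    for dep in tasks[target]["deps"]:
        schedule(dep, tasks, done, order)
    done.add(target)
    order.append(target)
    return order

plan = schedule("variants.vcf", tasks)
```

In Pwrake itself, independent prerequisites (here the two staging tasks) could additionally run in parallel; this sketch only derives a valid sequential order.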

  14. A systems approach to resilience in the built environment: the case of Cuba.

    PubMed

    Lizarralde, Gonzalo; Valladares, Arturo; Olivera, Andres; Bornstein, Lisa; Gould, Kevin; Barenstein, Jennifer Duyne

    2015-01-01

    Through its capacity to evoke systemic adaptation before and after disasters, resilience has become a seductive theory in disaster management. Several studies have linked the concept with systems theory; however, they have been mostly based on theoretical models with limited empirical support. The study of the Cuban model of resilience sheds light on the variables that create systemic resilience in the built environment and its relations with the social and natural environments. Cuba is vulnerable to many types of hazard, yet the country's disaster management benefits from institutional, health and education systems that develop social capital, knowledge and other assets that support the construction industry and housing development, systematic urban and regional planning, effective alerts, and evacuation plans. The Cuban political context is specific, but the study can nonetheless contribute to systemic improvements to the resilience of built environments in other contexts. PMID:25494958

  15. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity.

    PubMed

    van den Berg, Magdalena M H E; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N M; van Mechelen, Willem; van den Berg, Agnes E

    2015-12-01

    This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following, and preceding, acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications of greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, nor of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in restorative effects of viewing green space. PMID:26694426

  16. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity

    PubMed Central

    van den Berg, Magdalena M.H.E.; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N.M.; van Mechelen, Willem; van den Berg, Agnes E.

    2015-01-01

    This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following, and preceding, acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications of greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, nor of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in restorative effects of viewing green space. PMID:26694426

  17. Design and Implementation of a Custom Built Optical Projection Tomography System

    PubMed Central

    Wong, Michael D.; Dazai, Jun; Walls, Johnathon R.; Gale, Nicholas W.; Henkelman, R. Mark

    2013-01-01

    Optical projection tomography (OPT) is an imaging modality that has, in the last decade, answered numerous biological questions owing to its ability to view gene expression in 3 dimensions (3D) at high resolution for samples up to several cm3. This has increased demand for a cabinet OPT system, especially for mouse embryo phenotyping, for which OPT was primarily designed. The Medical Research Council (MRC) Technology group (UK) released a commercial OPT system, constructed by Skyscan, called the Bioptonics OPT 3001 scanner, which was installed in a limited number of locations. The Bioptonics system has been discontinued and currently no commercial OPT system is available. Therefore, a few research institutions have built their own OPT systems, choosing parts and a design specific to their biological applications. Some of these custom built OPT systems are preferred over the commercial Bioptonics system, as they provide improved performance based on stable translation and rotation stages and up-to-date CCD cameras coupled with objective lenses of high numerical aperture, increasing the resolution of the images. Here, we present a detailed description of a custom built OPT system that is robust and easy to build and install. Included are a hardware parts list, instructions for assembly, a description of the acquisition software and a free download site, and methods for calibration. The described OPT system can acquire a full 3D data set in 10 minutes at 6.7 micron isotropic resolution. The presented guide will hopefully increase adoption of OPT throughout the research community, as the system described can be implemented by personnel with minimal expertise in optics or engineering who have access to a machine shop. PMID:24023880

  18. Design and implementation of a custom built optical projection tomography system.

    PubMed

    Wong, Michael D; Dazai, Jun; Walls, Johnathon R; Gale, Nicholas W; Henkelman, R Mark

    2013-01-01

    Optical projection tomography (OPT) is an imaging modality that has, in the last decade, answered numerous biological questions owing to its ability to view gene expression in 3 dimensions (3D) at high resolution for samples up to several cm3. This has increased demand for a cabinet OPT system, especially for mouse embryo phenotyping, for which OPT was primarily designed. The Medical Research Council (MRC) Technology group (UK) released a commercial OPT system, constructed by Skyscan, called the Bioptonics OPT 3001 scanner, which was installed in a limited number of locations. The Bioptonics system has been discontinued and currently no commercial OPT system is available. Therefore, a few research institutions have built their own OPT systems, choosing parts and a design specific to their biological applications. Some of these custom built OPT systems are preferred over the commercial Bioptonics system, as they provide improved performance based on stable translation and rotation stages and up-to-date CCD cameras coupled with objective lenses of high numerical aperture, increasing the resolution of the images. Here, we present a detailed description of a custom built OPT system that is robust and easy to build and install. Included are a hardware parts list, instructions for assembly, a description of the acquisition software and a free download site, and methods for calibration. The described OPT system can acquire a full 3D data set in 10 minutes at 6.7 micron isotropic resolution. The presented guide will hopefully increase adoption of OPT throughout the research community, as the system described can be implemented by personnel with minimal expertise in optics or engineering who have access to a machine shop. PMID:24023880

  19. Tank Monitoring and Document control System (TMACS) As Built Software Design Document

    SciTech Connect

    GLASSCOCK, J.A.

    2000-01-27

    This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document by the system maintainers who will maintain and modify the TMACS functions as necessary. The heart of the TMACS system is the "point-processing" functionality, where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.
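    The point-processing loop described — receive a sample value from a field sensor, analyze it against configured limits, then log and possibly alarm — can be sketched as below. The point names and limit values are invented for illustration; this is not TMACS code.

```python
# Per-point configuration: each monitored point has alarm limits.
POINTS = {
    "TANK-101/TEMP":  {"low": 10.0, "high": 80.0},
    "TANK-101/LEVEL": {"low": 0.5,  "high": 9.5},
}

def process_point(point_id, value, log, alarms):
    """Analyze one field sample: alarm if out of limits, always log it."""
    limits = POINTS[point_id]
    if value < limits["low"] or value > limits["high"]:
        alarms.append((point_id, value))
    log.append((point_id, value))

log, alarms = [], []
for pid, v in [("TANK-101/TEMP", 25.0),    # in range: logged only
               ("TANK-101/TEMP", 95.0),    # above high limit: alarmed
               ("TANK-101/LEVEL", 0.2)]:   # below low limit: alarmed
    process_point(pid, v, log, alarms)
```

A production system would add per-point deadbands, alarm acknowledgement, and persistence, but the analyze/log/alarm split is the core of the point-processing design the document describes.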

  20. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a kind of short-focal-length (f = 6~16 mm) camera lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. Many studies show that a multiple-view geometry system built with fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision systems are not suitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system made up of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information for the whole global observation space while simultaneously acquiring a blind-area-free 360º×360º panoramic image, using a single vision device and one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
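    The claim that pinhole-model approaches break down for wider-than-hemispherical FOVs can be made concrete with one common fisheye projection, the equidistant model r = f·θ; the paper does not state which projection model SSVS uses, and the focal length below is purely illustrative.

```python
import math

# Under the pinhole model the image radius is r = f*tan(theta), which diverges
# as theta -> 90 deg, so rays at or beyond 90 deg off-axis have no valid image
# coordinate. The equidistant fisheye model r = f*theta stays finite past 90 deg.

f = 8.0  # mm, within the 6~16 mm focal-length range quoted in the abstract

def pinhole_radius(theta):
    return f * math.tan(theta)

def equidistant_radius(theta):
    return f * theta

theta = math.radians(100)               # a ray 100 deg off-axis (>hemisphere)
r_fisheye = equidistant_radius(theta)   # finite, usable image radius
r_pinhole = pinhole_radius(theta)       # negative: the ray maps behind the camera
```

This is why fisheye stereo systems need their own calibration and epipolar rectification, as the paper develops, instead of reusing pinhole-based routines.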

  1. Factory Built-in Type Simplified OCT System for Industrial Application

    NASA Astrophysics Data System (ADS)

    Shiina, Tatsuo; Miyazaki, Satoshi; Honda, Toshio

    A factory built-in type simplified optical coherence tomography (OCT) system was developed for industrial use. The system was designed for inspection of laser-welded resin. As a first approach, the current simplified OCT system for plant measurement was applied to validate an industrial sample: plastic resin. The industrial-use OCT was then designed in response to the results. The measurement speed and range of the developed OCT system were 50 scans/s and 5 mm, respectively. The low coherence length of 18.9 μm could clearly distinguish the gap between two laser-welded resins. The system became compact and low-cost, and has the flexibility of epi-optics.

  2. MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics

    PubMed Central

    2014-01-01

    Background Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. Results We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer programming (MINLP) problems, and variable neighborhood search (VNS) for Integer Programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. Conclusions MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods. PMID:24885957
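    The variable neighborhood search (VNS) idea mentioned for integer problems — shake the incumbent into progressively larger neighborhoods, run a local search on the result, and restart at the smallest neighborhood whenever an improvement is found — can be sketched generically as below. This is a textbook-style sketch with a toy objective, not MEIGO's implementation.

```python
import random

def objective(x, target=(3, -1, 4, 1, 5)):
    """Toy integer objective: distance squared to a hidden target vector."""
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def shake(x, k, rng):
    """Neighborhood k: perturb k randomly chosen coordinates by +-1."""
    y = list(x)
    for i in rng.sample(range(len(y)), k):
        y[i] += rng.choice((-1, 1))
    return y

def local_search(x):
    """Coordinate descent: greedily step each coordinate while it improves."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for step in (-1, 1):
                y = list(x)
                y[i] += step
                if objective(y) < objective(x):
                    x, improved = y, True
    return x

def vns(x0, k_max=3, iters=50, seed=0):
    rng = random.Random(seed)
    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k, rng))
            if objective(candidate) < objective(best):
                best, k = candidate, 1   # improvement: restart at neighborhood 1
            else:
                k += 1                   # no improvement: widen the neighborhood
    return best

solution = vns([0, 0, 0, 0, 0])
```

The systematic change of neighborhood size is what lets VNS escape local optima that would trap the inner local search alone; MEIGO additionally offers parallel, cooperative runs of this scheme.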

  3. Home-built magnetic resonance imaging system (0.3 T) with a complete digital spectrometer

    NASA Astrophysics Data System (ADS)

    Jie, Shen; Qin, Xu; Ying, Liu; Gengying, Li

    2005-10-01

    A home-built magnetic resonance imaging (MRI) system with a complete digital spectrometer has been designed for investigation of plants and animals. With the application of the latest digital integrated circuit technology, the digital spectrometer is greatly simplified without the loss of flexibility and performance. A powerful pulse sequence compiler with a graphical editor can allow the user to edit the pulse sequence more easily and more conveniently than ever before. Moreover, a permanent magnet capable of producing a 180 mm diam spherical homogeneous region is employed in our MRI system to ensure a comparatively large image size. Compared with previous work, our MRI system has the features of flexibility, relatively large imaging size, and low cost. Experimental results obtained with the proposed system are presented in this article.

  4. Bioinformatic Indications That COPI- and Clathrin-Based Transport Systems Are Not Present in Chloroplasts: An Arabidopsis Model

    PubMed Central

    Aronsson, Henrik

    2014-01-01

    Coated vesicle transport occurs in the cytosol of yeast, mammals and plants. It consists of three different transport systems, the COPI, COPII and clathrin coated vesicles (CCV), all of which participate in the transfer of proteins and lipids between different cytosolic compartments. There are also indications that chloroplasts have a vesicle transport system. Several putative chloroplast-localized proteins, including CPSAR1 and CPRabA5e with similarities to cytosolic COPII transport-related proteins, were detected in previous experimental and bioinformatics studies. These indications raised the hypothesis that a COPI- and/or CCV-related system may be present in chloroplasts, in addition to a COPII-related system. To test this hypothesis we bioinformatically searched for chloroplast proteins that may have functions similar to those of known cytosolic COPI and CCV components in the model plants Arabidopsis thaliana and Oryza sativa (subsp. japonica) (rice). We found 29 such proteins, based on domain similarity, in Arabidopsis, and 14 in rice. However, many components could not be identified, and most of those identified have assigned roles unrelated to either COPI or CCV transport. We conclude that COPII is probably the only active vesicle system in chloroplasts, at least in these model plants. The evolutionary implications of the findings are discussed. PMID:25137124

  5. Crowdsourcing for bioinformatics

    PubMed Central

    Good, Benjamin M.; Su, Andrew I.

    2013-01-01

    Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume ‘microtasks’ and systems for solving high-difficulty ‘megatasks’. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches. Contact: bgood@scripps.edu PMID:23782614
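    A standard ingredient of the microtask systems this framework covers is redundancy: each task is assigned to several workers and their answers are aggregated, most simply by majority vote. The sketch below is a generic illustration of that aggregation step (the labels are invented), not a method taken from the paper.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer and its vote share among workers."""
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(answers)

# Three hypothetical workers annotate the same image in a microtask market.
label, agreement = majority_vote(["mitotic", "mitotic", "interphase"])
```

The vote share doubles as a cheap confidence signal: low-agreement tasks can be routed to additional workers or to an expert, which is how microtask pipelines trade cost against quality.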

  6. Enabling high-throughput data management for systems biology: The Bioinformatics Resource Manager

    SciTech Connect

    Shah, Anuj R.; Singhal, Mudita; Klicker, Kyle R.; Stephan, Eric G.; Wiley, H. S.; Waters, Katrina M.

    2007-02-25

    The Bioinformatics Resource Manager (BRM) is a problem-solving environment that provides the user with data retrieval, management, analysis and visualization capabilities through all aspects of an experimental study. Designed in collaboration with biologists, BRM simplifies the integration of experimental data across platforms and with other publicly available information from external data sources. An analysis pipeline is facilitated within BRM by the seamless connectivity of user data with visual analytics tools, through reformatting of the data for easy import. BRM is developed using Java™ and other open-source technologies so that it can be freely distributed.

  7. Expert systems built by the Expert: An evaluation of OPS5

    NASA Technical Reports Server (NTRS)

    Jackson, Robert

    1987-01-01

    Two expert systems were written in OPS5 by the expert, a Ph.D. astronomer with no prior experience in artificial intelligence or expert systems, without the use of a knowledge engineer. The first system was built from scratch and uses 146 rules to check for duplication of scientific information within a pool of prospective observations. The second system was grafted onto another expert system and uses 149 additional rules to estimate the spacecraft and ground resources consumed by a set of prospective observations. The small vocabulary, the "IF this occurs THEN do that" logical structure of OPS5, and the ability to follow program execution allowed the expert to design and implement these systems with only the data structures and rules of another OPS5 system as an example. The modularity of the rules in OPS5 allowed the second system to modify the rulebase of the system onto which it was grafted without changing the code or the operation of that system. These experiences show that experts are able to develop their own expert systems due to the ease of programming and code reusability in OPS5.
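The match-act cycle that makes OPS5 approachable can be sketched in a few lines. The rules and facts below are invented for illustration (they are not from the paper's 146-rule duplication checker), and real OPS5 adds pattern variables and conflict resolution that this toy loop omits:

```python
# Toy forward-chaining loop in the OPS5 spirit: rules match conditions over
# working memory (a set of fact tuples) and fire actions that add new facts.
rules = [
    # (name, condition over facts, action producing an updated fact set)
    ("duplicate-target",
     lambda f: ("obs", "M31") in f and ("archived", "M31") in f,
     lambda f: f | {("flag", "duplicate:M31")}),
    ("archive-seen",
     lambda f: ("obs", "M31") in f,
     lambda f: f | {("archived", "M31")}),
]

facts = {("obs", "M31")}          # initial working memory
fired = set()                     # each toy rule fires at most once
changed = True
while changed:                    # repeat the match-act cycle until quiescence
    changed = False
    for name, cond, act in rules:
        if name not in fired and cond(facts):
            facts = act(facts)
            fired.add(name)
            changed = True

print(sorted(facts))
```

On the first pass only `archive-seen` matches; its new fact then enables `duplicate-target` on the second pass, illustrating how rule modularity lets later rules build on earlier firings.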

  8. Finding the next-best scanner position for as-built modeling of piping systems

    NASA Astrophysics Data System (ADS)

    Kawashima, K.; Yamanishi, S.; Kanai, S.; Date, H.

    2014-06-01

    Renovation of plant equipment in petroleum refineries and chemical factories has recently become frequent, and the demand for 3D as-built modelling of piping systems is increasing rapidly. Terrestrial laser scanners are used very often in the measurement for as-built modelling. However, the tangled structure of the piping systems results in complex occluded areas, and these areas must be captured from different scanner positions. For efficient and exhaustive measurement of the piping system, the scanner should be placed at optimum positions where the occluded parts of the piping system are captured as much as possible in fewer scans. However, these "next-best" scanner positions are usually determined by experienced operators, and there is no guarantee that these positions fulfil the optimum condition. Therefore, this paper proposes a computer-aided method of optimal sequential view planning for object recognition in plant piping systems using a terrestrial laser scanner. In the method, a sequence of next-best positions of a terrestrial laser scanner specialized for as-built modelling of piping systems can be found without any a priori information about the piping objects. Different from conventional approaches to the next-best-view (NBV) problem, in the proposed method piping objects in the measured point clouds are recognized right after every scan, local occluded spaces occupied by the unseen piping systems are then estimated, and the best scanner position can be found so as to minimize these local occluded spaces. The simulation results show that our proposed method outperforms a conventional approach in recognition accuracy, efficiency and computational time.
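At its core, the planning loop described above reduces to a greedy choice: after each scan, pick the candidate position that removes the most remaining occluded space. A toy sketch of that selection step follows; the candidate positions and their visibility sets are invented, whereas the paper derives them from the recognized piping objects:

```python
# Greedy next-best-view selection over occluded cells.
occluded = {1, 2, 3, 4, 5, 6, 7, 8}          # unseen cells after the first scan
candidates = {                               # cells each position could reveal
    "A": {1, 2, 3},
    "B": {3, 4, 5, 6},
    "C": {6, 7},
    "D": {7, 8},
}

plan = []
while occluded:
    # Next-best position: maximizes the newly revealed occluded cells.
    best = max(candidates, key=lambda p: len(candidates[p] & occluded))
    gained = candidates[best] & occluded
    if not gained:                           # remaining cells are unreachable
        break
    plan.append(best)
    occluded -= gained

print("scan order:", plan, "| unresolved:", sorted(occluded))
```

Here B reveals the most at first, and the loop terminates after three scans with no occluded cells left; position C never needs to be visited.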

  9. Bioinformatics for Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  10. Exploring the immunogenome with bioinformatics.

    PubMed

    de Bono, Bernard; Trowsdale, John

    2003-08-01

    A better description of the immune system can be afforded if the latest developments in bioinformatics are applied to integrate sequence with structure and function. Clear guidelines for the upgrade of the bioinformatic capability of the immunogenetics laboratory are discussed in the light of more powerful methods to detect homology, combined approaches to predict the three dimensional properties of a protein and a robust strategy to represent the biological role of a gene. PMID:14690048

  11. Initial clinical testing of a multi-spectral imaging system built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David

    2016-03-01

    Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix were acquired, consisting of images from the various wavelengths. Image acquisition took 1-2 sec. Areas suspected for dysplasia under white-light imaging were biopsied, according to the standard of care. Biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites was processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest MSMC holds promise for cervical imaging.

  12. Design of a built-in health monitoring system for bolted thermal protection panels

    NASA Astrophysics Data System (ADS)

    Yang, Jinkyu; Chang, Fu-Kuo; Derriso, Mark M.

    2003-08-01

    Space vehicles require high-performance thermal protection systems (TPS) that provide high-temperature insulation capability with lower weight, high strength, and reliable integration with the existing system. Carbon-carbon panels mounted with bracket joints are potential future thermal protection systems with light weight, low creep, and high stiffness at high temperatures. However, the thermal protection system experiences a very harsh high-temperature and aerodynamic environment in addition to foreign-object impacts. Undetected damage or failure of panels can lead to catastrophe. Therefore, knowledge of the integrity of the thermal protection system before each launch and reentry is essential to the success of the mission. The objective of the study is to develop a built-in diagnostic system to assess the integrity of TPS panels as well as to lower inspection and maintenance time and costs. An integrated structural health monitoring system is being developed to monitor the TPS panels. The technology includes investigation of the loosening of the bolts that connect TPS panels to the supporting structure and, potentially, identification of the location of damage on a panel caused by external impacts from micrometeorites and other objects. The first-generation prototype was manufactured and tested in an acoustic chamber that simulated a re-entry environment to investigate the feasibility of the health monitoring system, focusing on its survivability and sensitivity. The preliminary results were very promising. Based on the test results, a second-generation design was proposed to improve on the performance of the first. To support reliable and accurate diagnostic decisions for the TPS panels, an advanced algorithm was developed with the aid of a wavelet transform technique.

  13. An Undergraduate-Built Prototype Altitude Determination System (PADS) for High Altitude Research Balloons.

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Abot, J.; Casarotto, V.; Dichoso, J.; Doody, E.; Esteves, F.; Morsch Filho, E.; Gonteski, D.; Lamos, M.; Leo, A.; Mulder, N.; Matubara, F.; Schramm, P.; Silva, R.; Quisberth, J.; Uritsky, G.; Kogut, A.; Lowe, L.; Mirel, P.; Lazear, J.

    2014-12-01

    In this project a multi-disciplinary undergraduate team from CUA, comprising majors in Physics, Mechanical Engineering, Electrical Engineering, and Biology, designs, builds, tests, flies, and analyzes the data from a prototype attitude determination system (PADS). The goal of the experiment is to determine whether an inexpensive attitude determination system could be built for high-altitude research balloons using MEMS gyros. PADS is a NASA-funded project, built by students with the cooperation of CUA faculty, Verner, Bruhweiler, and Abot, along with the contributed expertise of researchers and engineers at NASA/GSFC, Kogut, Lowe, Mirel, and Lazear. The project was initiated through a course taught in CUA's School of Engineering, which was followed by a devoted effort by students during the summer of 2014. The project is an experiment to use 18 MEMS gyros, similar to those used in many smartphones, to produce an averaged positional error signal that could be compared with the motion of the fixed optical system as recorded through a string of optical images of stellar fields stored on a hard drive flown with the experiment. The optical system, camera microprocessor, and hard drive are enclosed in a pressure vessel, which maintains approximately atmospheric pressure throughout the balloon flight. The experiment uses multiple microprocessors to control the camera exposures, record gyro data, and provide thermal control. CUA students also participated in NASA-led design reviews. Four students traveled to NASA's Columbia Scientific Balloon Facility in Palestine, Texas to integrate PADS into a large balloon gondola containing other experiments before it was shipped and then launched in mid-August at Ft. Sumner, New Mexico. The payload is to fly at a float altitude of 40-45,000 m, and the flight lasts approximately 15 hours. The payload is to return to earth by parachute, and the retrieved data are to be analyzed by CUA undergraduates. A description of the instrument is presented.

  14. Measurement of airflow and pressure characteristics of a fan built in a car ventilation system

    NASA Astrophysics Data System (ADS)

    Pokorný, Jan; Poláček, Filip; Fojtlín, Miloš; Fišer, Jan; Jícha, Miroslav

    2016-03-01

    The aim of this study was to identify a set of operating points of a fan built into the ventilation system of our test car. These operating points are given by the fan pressure characteristics and are defined by the pressure drop of the HVAC system (air ducts and vents) and the volumetric flow rate of ventilation air. To cover a wide range of pressure-drop situations, four vent-flap setups were examined: (1) all vents opened, (2) only central vents closed, (3) only central vents opened, and (4) all vents closed. To cover different volumetric flow rates, each case was measured at at least four different fan speeds, defined by the fan voltage. It was observed that the pressure difference of the fan is proportional to the fan voltage and depends strongly on the throttling of the air distribution system by the settings of the vent flaps. In the case of our test car, we identified correlations between the volumetric flow rate of ventilation air, the fan pressure difference, and the fan voltage. These correlations will facilitate, and reduce the time cost of, subsequent experiments with this test car.
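As a rough illustration of how such a voltage-to-pressure correlation can be extracted from bench data, the sketch below fits a linear model by least squares. The numbers are invented for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical bench data for one vent-flap setup.
voltage = np.array([4.0, 6.0, 8.0, 10.0, 12.0])        # fan voltage, V
dp      = np.array([55.0, 88.0, 118.0, 151.0, 183.0])  # fan pressure difference, Pa

# Linear model dp ~ a*U + b, reflecting the reported proportionality.
slope, intercept = np.polyfit(voltage, dp, 1)
r = np.corrcoef(voltage, dp)[0, 1]                     # correlation coefficient

print(f"dp ~ {slope:.1f}*U + {intercept:.1f}  (r = {r:.3f})")
predicted = slope * 9.0 + intercept                    # interpolate at 9 V
```

With one such fit per flap setup, the fan's operating point for any voltage can be estimated without re-running the full measurement campaign.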

  15. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments.

    PubMed

    Hachtel, A J; Gillette, M C; Clements, E R; Zhong, S; Weeks, M R; Bali, S

    2016-05-01

    A novel home-built system for imaging cold atom samples is presented using a readily available astronomy camera which has the requisite sensitivity but no timing-control. We integrate the camera with LabVIEW achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in terms of system jitter and readout time over previously reported home-built systems. Our system rivals current commercial "black box" systems in performance and user-friendliness. PMID:27250483

  16. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS).

    PubMed

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-01-01

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case for MEMS (Micro-Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present an experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subjected to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from the MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of statistical features from the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that a detection accuracy higher than 90% is achievable. PMID:27271630
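The pipeline described above (statistical features from repeatable-scenario traces, fed to an SVM) can be sketched on simulated data. Everything below is a toy under stated assumptions: the sensor biases, noise levels, and the exact feature set are invented, since the abstract does not specify them:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(trace):
    # Simple per-trace statistics (assumed; the paper's feature set differs).
    return [trace.mean(), trace.std(),
            np.sqrt((trace ** 2).mean()), np.abs(np.diff(trace)).mean()]

# Three simulated "phones": same movement, slightly different bias and noise,
# standing in for manufacturing spread in the MEMS accelerometer.
phones = [(0.00, 0.010), (0.02, 0.012), (-0.01, 0.015)]
X, y = [], []
for label, (bias, noise) in enumerate(phones):
    for _ in range(60):
        motion = np.sin(np.linspace(0, 8 * np.pi, 500))      # repeatable scenario
        trace = motion + bias + rng.normal(0.0, noise, 500)  # sensor fingerprint
        X.append(features(trace))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"identification accuracy: {accuracy:.2f}")
```

Standardizing the features before the SVM matters here, since the bias-driven mean feature is orders of magnitude smaller than the motion-driven ones.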

  18. Handheld electrocardiogram measurement instrument using a new peak quantification method algorithm built on a system-on-chip embedded system

    NASA Astrophysics Data System (ADS)

    Chang Chien, Jia-Ren; Tai, Cheng-Chi

    2006-09-01

    This article reports on the design and development of a new electrocardiogram (ECG) measurement instrument built on a system-on-chip (SOC) embedded system. A new approach using the peak quantification method (PQM) for measuring the human heart rate is described. A computer, some medical equipment, and other facilities are often required for conducting traditional ECG measurements. However, such instruments have some disadvantages: they are bulky, not very easy to transport, expensive, and so forth. Hence, we propose a new design for ECG measurement built on an embedded system. Our system adopts a SOC and ECG detection circuits to realize a real-time, low-cost, and compact ECG measurement system. Regarding heart rate computation, the experimental results show that the new PQM algorithm, when applied to heart rate measurements, yields an error smaller than 1 bpm. In addition, the correlation coefficient between the measured and actual heartbeats can reach 0.94 when the heart rate is less than 153 bpm. This shows that the use of the PQM algorithm gives an extremely high degree of accuracy.
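The abstract does not spell out the PQM internals, but the underlying idea, quantifying R-peaks and converting the mean peak-to-peak interval into beats per minute, can be illustrated on a synthetic trace. The sampling rate, peak shape, and threshold below are assumptions, not the paper's values:

```python
import numpy as np

fs = 250.0                          # sampling rate, Hz (assumed)
bpm_true = 72.0
beat_period = 60.0 / bpm_true
t = np.arange(0, 10, 1 / fs)        # 10 s of signal

# Synthetic trace: narrow Gaussian "R-peaks" on a noisy baseline.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.02, t.size)
for beat in np.arange(0.5, 10, beat_period):
    signal += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

def detect_peaks(x, threshold=0.5):
    # A sample counts as a peak if it exceeds the threshold and both neighbours.
    return np.flatnonzero((x[1:-1] > threshold) &
                          (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1

peaks = detect_peaks(signal)
intervals = np.diff(peaks) / fs     # seconds between consecutive beats
bpm_est = 60.0 / intervals.mean()
print(f"estimated heart rate: {bpm_est:.1f} bpm")
```

The estimate lands within a fraction of a beat per minute of the true 72 bpm, consistent with the sub-1-bpm error the paper reports for its (more elaborate) method.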

  19. Recommendation Systems for Geoscience Data Portals Built by Analyzing Usage Patterns

    NASA Astrophysics Data System (ADS)

    Crosby, C.; Nandigam, V.; Baru, C.

    2009-04-01

    Since its launch five years ago, the National Science Foundation-funded GEON Project (www.geongrid.org) has been providing access to a variety of geoscience data sets such as geologic maps and other geographic information system (GIS)-oriented data, paleontologic databases, gravity and magnetics data and LiDAR topography via its online portal interface. In addition to data, the GEON Portal also provides web-based tools and other resources that enable users to process and interact with data. Examples of these tools include functions to dynamically map and integrate GIS data, compute synthetic seismograms, and produce custom digital elevation models (DEMs) with user-defined parameters such as resolution. The GEON Portal, built on the GridSphere portal framework, allows us to capture user interaction with the system. In addition to the site access statistics captured by tools like Google Analytics, which record hits per unit time, search keywords, operating systems, browsers, and referring sites, we also record additional statistics such as which data sets are being downloaded and in what formats, processing parameters, and navigation pathways through the portal. With over four years of data now available from the GEON Portal, this record of usage is a rich resource for exploring how earth scientists discover and utilize online data sets. Furthermore, we propose that this data could ultimately be harnessed to optimize the way users interact with the data portal, design intelligent processing and data management systems, and make recommendations on algorithm settings and other available relevant data. The paradigm of integrating popular and commonly used patterns to make recommendations to a user is well established in the world of e-commerce, where users receive suggestions on books, music and other products that they may find interesting based on their website browsing and purchasing history, as well as the patterns of fellow users who have made similar
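The "users who downloaded X also downloaded Y" paradigm described above can be sketched as an item-to-item co-occurrence recommender over download sessions. The session data and dataset names below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical portal download sessions (sets of dataset identifiers).
sessions = [
    {"lidar_dem", "geologic_map", "gravity"},
    {"lidar_dem", "gravity"},
    {"geologic_map", "paleo_db"},
    {"lidar_dem", "geologic_map"},
]

# Count how often each ordered pair of datasets appears in the same session.
co = Counter()
for s in sessions:
    for a, b in combinations(sorted(s), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

def recommend(item, k=2):
    # Rank other datasets by how often they co-occur with `item`.
    scores = Counter({b: n for (a, b), n in co.items() if a == item})
    return [name for name, _ in scores.most_common(k)]

print(recommend("lidar_dem"))    # datasets most often fetched with lidar_dem
```

Production recommenders add normalization and user similarity on top of this, but the raw co-occurrence counts are exactly the kind of signal the portal's usage logs already contain.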

  20. Pattern recognition in bioinformatics.

    PubMed

    de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T

    2013-09-01

    Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained. PMID:23559637
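The three core tasks named in the review, dimensionality reduction, clustering, and classification, can be demonstrated in a few lines on synthetic "expression" data. This is a toy sketch using scikit-learn, not material from the course itself:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# 150 samples with 50 synthetic "gene" features, from 3 phenotype groups.
X, y = make_blobs(n_samples=150, n_features=50, centers=3,
                  cluster_std=2.0, random_state=0)

# Dimensionality reduction: project the 50 features onto 2 components.
X2 = PCA(n_components=2).fit_transform(X)

# Clustering: group samples without using the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

# Classification: predict the phenotype of held-out samples.
X_tr, X_te, y_tr, y_te = train_test_split(X2, y, test_size=0.3, random_state=0)
acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
print(f"classification accuracy: {acc:.2f}")
```

On real microarray or sequencing data the same three steps apply, with the pitfalls the review emphasizes (feature selection bias, overfitting on small samples) deciding whether the reported accuracy means anything.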

  1. Experiences with Testing the Largest Ground System NASA Has Ever Built

    NASA Technical Reports Server (NTRS)

    Lehtonen, Ken; Messerly, Robert

    2003-01-01

    In the 1980s, the National Aeronautics and Space Administration (NASA) embarked upon a major Earth-focused program called Mission to Planet Earth. The Goddard Space Flight Center (GSFC) was selected to manage and develop a key component - the Earth Observing System (EOS). The EOS consisted of four major missions designed to monitor the Earth: Terra (launched December 1999), Aqua (launched May 2002), ICESat (Ice, Cloud, and Land Elevation Satellite, launched January 2003), and Aura (scheduled for launch January 2004). The purpose of these missions was to provide support for NASA's long-term research effort for determining how human-induced and natural changes affect our global environment. The EOS Data and Information System (EOSDIS), a globally distributed, large-scale scientific system, was built to support EOS. Its primary function is to capture, collect, process, and distribute the most voluminous set of remotely sensed scientific data to date, estimated at 350 Gbytes per day. The EOSDIS is composed of a diverse set of elements with functional capabilities that require the implementation of a complex set of computers, high-speed networks, mission-unique equipment, and associated Information Technology (IT) software along with mission-specific software. All missions are constrained by schedule, budget, and staffing resources, and rigorous testing has been shown to be critical to the success of each mission. This paper addresses the challenges associated with the planning, test definition, resource scheduling, execution, and discrepancy reporting involved in the mission readiness testing of a ground system on the scale of EOSDIS. The size and complexity of the mission systems supporting the Aqua flight operations, for example, combined with the limited resources available, prompted the project to challenge the prevailing testing culture. The resulting success of the Aqua Mission Readiness Testing (MRT) program was due in no

  2. Bioinformatics and genomic medicine.

    PubMed

    Kim, Ju Han

    2002-01-01

    Bioinformatics is a rapidly emerging field of biomedical research. A flood of large-scale genomic and postgenomic data means that many of the challenges in biomedical research are now challenges in computational science. Clinical informatics has long developed methodologies to improve biomedical research and clinical care by integrating experimental and clinical information systems. The informatics revolution in both bioinformatics and clinical informatics will eventually change the current practice of medicine, including diagnostics, therapeutics, and prognostics. Postgenome informatics, powered by high-throughput technologies and genomic-scale databases, is likely to transform our biomedical understanding forever, in much the same way that biochemistry did a generation ago. This paper describes how these technologies will impact biomedical research and clinical care, emphasizing recent advances in biochip-based functional genomics and proteomics. Basic data preprocessing with normalization and filtering, primary pattern analysis, and machine-learning algorithms are discussed. Use of integrative biochip informatics technologies, including multivariate data projection, gene-metabolic pathway mapping, automated biomolecular annotation, text mining of factual and literature databases, and the integrated management of biomolecular databases, are also discussed. PMID:12544491

  3. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects. PMID:26351170

  4. Using Geographic Information Systems (GIS) to assess the role of the built environment in influencing obesity: a glossary

    PubMed Central

    2011-01-01

    Features of the built environment are increasingly being recognised as potentially important determinants of obesity. This has come about, in part, because of advances in methodological tools such as Geographic Information Systems (GIS). GIS has made the procurement of data related to the built environment easier and given researchers the flexibility to create a new generation of environmental exposure measures such as the travel time to the nearest supermarket or calculations of the amount of neighbourhood greenspace. Given the rapid advances in the availability of GIS data and the relative ease of use of GIS software, a glossary on the use of GIS to assess the built environment is timely. As a case study, we draw on aspects of the food and physical activity environments as they might apply to obesity, to define key GIS terms related to data collection, concepts, and the measurement of environmental features. PMID:21722367

  5. Computational intelligence techniques in bioinformatics.

    PubMed

    Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I

    2013-12-01

    Computational intelligence (CI) is a well-established paradigm with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitate intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of the CI techniques in bioinformatics. We will show how CI techniques including neural networks, restricted Boltzmann machines, deep belief networks, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines could be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein function and structure prediction. We discuss some representative methods to provide inspiring examples to illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented and an extensive bibliography is included. PMID:23891719

  6. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background We introduce a Knowledge-based Decision Support System (KDSS) in order to address the Protein Complex Extraction problem. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results We briefly present the KDSS's architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS's knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results. PMID:23368995
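In this setting, complex extraction amounts to clustering a protein-protein interaction graph. As the simplest possible stand-in for the clustering algorithms such a system would select, the sketch below groups proteins into connected components of a toy interaction list; the interactions are invented for illustration, and real methods (e.g. MCL-style clustering) go well beyond this:

```python
from collections import defaultdict

# Hypothetical pairwise interactions; a real run would use the yeast PPI subset.
interactions = [("Cdc28", "Cln2"), ("Cdc28", "Clb5"),
                ("Pre1", "Pre2"), ("Pre2", "Pup1")]

# Build an undirected adjacency map.
graph = defaultdict(set)
for a, b in interactions:
    graph[a].add(b)
    graph[b].add(a)

def complexes(graph):
    seen, out = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first sweep of one component
            p = stack.pop()
            if p in comp:
                continue
            comp.add(p)
            stack.extend(graph[p] - comp)
        seen |= comp
        out.append(comp)
    return out

comps = complexes(graph)
print(comps)
```

Connected components over-merge real complexes that share proteins, which is exactly why a decision-support layer that picks an appropriate clustering strategy per dataset is useful.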

  7. The origins of bioinformatics.

    PubMed

    Hagen, J B

    2000-12-01

    Bioinformatics is often described as being in its infancy, but computers emerged as important tools in molecular biology during the early 1960s. A decade before DNA sequencing became feasible, computational biologists focused on the rapidly accumulating data from protein biochemistry. Without the benefits of supercomputers or computer networks, these scientists laid important conceptual and technical foundations for bioinformatics today. PMID:11252753

  8. BioSig: A bioinformatic system for studying the mechanism of intra-cell signaling

    SciTech Connect

    Parvin, B.; Cong, G.; Fontenay, G.; Taylor, J.; Henshall, R.; Barcellos-Hoff, M.H.

    2000-12-15

    Mapping inter-cell signaling pathways requires an integrated view of experimental and informatic protocols. BioSig provides the foundation for cataloging inter-cell responses as a function of particular conditioning, treatment, staining, etc., for either in vivo or in vitro experiments. This paper outlines the system architecture, a functional data model for representing experimental protocols, algorithms for image analysis, and the required statistical analysis. The architecture provides remote shared operation of an inverted optical microscope, and couples instrument operation with image acquisition and annotation. The information is stored in an object-oriented database. The algorithms extract structural information such as morphology and organization, and map it to functional information such as inter-cellular responses. An example of the system's usage is included.

  9. Component-Based Approach for Educating Students in Bioinformatics

    ERIC Educational Resources Information Center

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  10. Integration of systems glycobiology with bioinformatics toolboxes, glycoinformatics resources, and glycoproteomics data.

    PubMed

    Liu, Gang; Neelamegham, Sriram

    2015-01-01

    The glycome constitutes the entire complement of free carbohydrates and glycoconjugates expressed on whole cells or tissues. 'Systems Glycobiology' is an emerging discipline that aims to quantitatively describe and analyse the glycome. Here, instead of developing a detailed understanding of single biochemical processes, a combination of computational and experimental tools is used to seek an integrated or 'systems-level' view. This can explain how multiple biochemical reactions and transport processes interact with each other to control glycome biosynthesis and function. Computational methods in this field commonly build in silico reaction network models to describe experimental data derived from structural studies that measure cell-surface glycan distribution. While considerable progress has been made, several challenges remain due to the complex and heterogeneous nature of this post-translational modification. First, for the in silico models to be standardized and shared among laboratories, it is necessary to integrate glycan structure information and glycosylation-related enzyme definitions into the mathematical models. Second, as glycoinformatics resources grow, it would be attractive to utilize 'Big Data' stored in these repositories for model construction and validation. Third, while the technology for profiling the glycome at the whole-cell level has been standardized, there is a need to integrate mass spectrometry-derived site-specific glycosylation data into the models. The current review discusses progress that is being made to resolve the above bottlenecks. The focus is on how computational models can bridge the gap between 'data' generated in wet-laboratory studies and 'knowledge' that can enhance our understanding of the glycome. PMID:25871730

  11. Clinical Bioinformatics: challenges and opportunities

    PubMed Central

    2012-01-01

    Background Network Tools and Applications in Biology (NETTAB) Workshops are a series of meetings focused on the most promising and innovative ICT tools and their usefulness in Bioinformatics. The NETTAB 2011 workshop, held in Pavia, Italy, in October 2011, was aimed at presenting some of the most relevant methods, tools and infrastructures that are nowadays available for Clinical Bioinformatics (CBI), the research field that deals with clinical applications of bioinformatics. Methods In this editorial, the viewpoints and opinions of three world leaders in CBI, who were invited to participate in a panel discussion at the NETTAB workshop on the next challenges and future opportunities of this field, are reported. These include the development of data warehouses and ICT infrastructures for data sharing, the definition of standards for sharing phenotypic data and the implementation of novel tools for efficient search computing solutions. Results Some of the most important design features of a CBI-ICT infrastructure are presented, including data warehousing, modularity and flexibility, open-source development, semantic interoperability, and integrated search and retrieval of -omics information. Conclusions Clinical Bioinformatics goals are ambitious. Many factors, including the availability of high-throughput "-omics" technologies and equipment, the widespread availability of clinical data warehouses and the noteworthy increase in data storage and computational power of the most recent ICT systems, justify research and efforts in this domain, which promises to be a crucial leveraging factor for biomedical research. PMID:23095472

  12. Volarea - a bioinformatics tool to calculate the surface area and the volume of molecular systems.

    PubMed

    Ribeiro, João V; Tamames, Juan A C; Cerqueira, Nuno M F S A; Fernandes, Pedro A; Ramos, Maria J

    2013-12-01

    We have developed a computer program named 'VolArea' that allows for a rapid and fully automated analysis of molecular structures. The software calculates the surface area and the volume of molecular structures, as well as the volume of molecular cavities. The surface area facility can be used to calculate the solvent-exposed surface area of a molecule or the contact area between two molecules. The volume algorithm can be used to predict not only the space occupied by any molecular structure, but also the volume of cavities, such as tunnels or clefts. The software finds wide application in the characterization of systems such as protein/ligand complexes, enzyme active sites, protein/protein interfaces, enzyme channels, membrane pores, and solvent tunnels, among others. Some examples are given to illustrate its potential. VolArea is available as a plug-in of the widely distributed software Visual Molecular Dynamics (VMD) and is freely available at http://www.fc.up.pt/PortoBioComp/Software/Volarea/Home.html. PMID:24164915
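
    The abstract does not disclose VolArea's actual algorithm, but the core idea of a molecular volume calculation can be sketched with a simple grid count over a union of atomic spheres; this is an illustrative stand-in, and the single-atom input is a made-up test case.

```python
def molecular_volume(atoms, step=0.1):
    """Grid-count estimate of the volume (A^3) of a union of atomic spheres.
    atoms: list of (x, y, z, radius) tuples."""
    lo = [min(a[i] - a[3] for a in atoms) for i in range(3)]
    hi = [max(a[i] + a[3] for a in atoms) for i in range(3)]
    def axis(i):
        n = int(round((hi[i] - lo[i]) / step)) + 1
        return [lo[i] + step * j for j in range(n)]
    xs, ys, zs = axis(0), axis(1), axis(2)
    inside = 0
    for x in xs:
        for y in ys:
            for z in zs:
                # a grid cell counts if its center lies inside any atom's sphere
                if any((x - ax) ** 2 + (y - ay) ** 2 + (z - az) ** 2 <= r * r
                       for ax, ay, az, r in atoms):
                    inside += 1
    return inside * step ** 3

# single hypothetical atom of radius 1 A: exact volume is 4/3*pi ~ 4.19 A^3
v = molecular_volume([(0.0, 0.0, 0.0, 1.0)])
```

    Overlapping spheres are handled automatically (a cell is counted once no matter how many atoms cover it); accuracy improves as `step` shrinks, at cubic cost.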

  13. Crimean-Congo Hemorrhagic Fever Virus Gn Bioinformatic Analysis and Construction of a Recombinant Bacmid in Order to Express Gn by Baculovirus Expression System

    PubMed Central

    Rahpeyma, Mehdi; Fotouhi, Fatemeh; Makvandi, Manouchehr; Ghadiri, Ata; Samarbaf-Zadeh, Alireza

    2015-01-01

    Background Crimean-Congo hemorrhagic fever virus (CCHFV), a member of the genus Nairovirus in the family Bunyaviridae, causes a life-threatening disease in humans. Currently, there is no vaccine against CCHFV, and detailed structural analysis of CCHFV proteins is still lacking. The CCHFV M RNA segment encodes two viral surface glycoproteins known as Gn and Gc. Viral glycoproteins can be considered key targets for vaccine development. Objectives The current study aimed to investigate the structural bioinformatics of the CCHFV Gn protein and to design a construct for generating a recombinant bacmid for expression in the baculovirus system. Materials and Methods The goal was to express the Gn protein in insect cells for use as an antigen in animal-model vaccine studies. Bioinformatic analysis of the CCHFV Gn protein was performed; a construct was designed and cloned into the pFastBacHTb vector, and a recombinant Gn-bacmid was generated with the Bac-to-Bac system. Results The primary, secondary, and 3D structures of CCHFV Gn were obtained, and PCR with M13 forward and reverse primers confirmed the generation of recombinant bacmid DNA harboring the Gn coding region under the polyhedrin promoter. Conclusions Characterization of the detailed structure of CCHFV Gn with bioinformatics software provides the basis for designing new experiments, and construction of a recombinant bacmid harboring CCHFV Gn is valuable for designing a recombinant vaccine against deadly pathogens like CCHFV. PMID:26862379

  14. Systems based on photogrammetry to evaluation of built heritage: tentative guidelines and control parameters

    NASA Astrophysics Data System (ADS)

    Valença, J.

    2014-06-01

    Technological innovations based on close-range imaging have arisen, driven by advances in both mathematical algorithms and acquisition equipment. This evolution allows data to be acquired with large, powerful sensors and processed quickly and efficiently. In general, the preservation of built heritage has applied these technological innovations very successfully in its different areas of intervention, namely photogrammetry, digital image processing and multispectral image analysis. Furthermore, commercial software and hardware packages have emerged. Thus, guidelines for best-practice procedures and for validating the results usually obtained should be established; the governing concepts should be simple and easy to understand, even for non-experts in the field, and should relate the characteristics of: (i) the objects under study; (ii) the acquisition conditions; (iii) the methods applied; and (iv) the equipment applied. In this scope, establishing the limits of validity of the methods and a comprehensive protocol to achieve the precision and accuracy required for structural analysis is a mandatory task. The application of close-range photogrammetry to building 3D geometric models and to evaluating displacements is presented herein. Parameters such as distance-to-object, sensor size and focal length are correlated with the precision and accuracy achieved for displacements, in both experimental and on-site environments. This paper presents an early-stage study. The aim is to define simple expressions to estimate the characteristics of the equipment and/or the conditions for image acquisition, depending on the required precision and accuracy. The results will be used to define tentative guidelines covering the whole procedure, from image acquisition to the final coordinates and displacements.
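
    The correlation sought between distance-to-object, sensor characteristics and achievable precision has a first-order geometric core: the ground sample distance from similar triangles. A minimal sketch; the 10 m / 50 mm / 5 um setup is a hypothetical example, not taken from the paper.

```python
def ground_sample_distance(distance_m, focal_length_mm, pixel_pitch_um):
    """Object-space size of one pixel, in mm: a first-order bound on the
    precision of image-based coordinate and displacement measurements.
    Similar triangles: object_size / distance = pixel_pitch / focal_length."""
    return (pixel_pitch_um * 1e-3) * (distance_m * 1e3) / focal_length_mm

# hypothetical setup: 10 m to a facade, 50 mm lens, 5 um pixel pitch
gsd_mm = ground_sample_distance(10, 50, 5)   # 1.0 mm of object space per pixel
```

    Sub-pixel matching, network geometry and calibration quality then push the achievable precision below (or above) this single-image bound, which is exactly what such guidelines would quantify.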

  15. Geochemistry of rare earth elements in a passive treatment system built for acid mine drainage remediation.

    PubMed

    Prudêncio, Maria Isabel; Valente, Teresa; Marques, Rosa; Sequeira Braga, Maria Amália; Pamplona, Jorge

    2015-11-01

    Rare earth elements (REE) were used to assess attenuation processes in a passive system for acid mine drainage (AMD) treatment (Jales, Portugal). Hydrochemical parameters and REE contents in water, soils and sediments were obtained along the treatment system after summer and after winter. After summer, REE contents in the water decrease as a result of interaction with limestone, while in the wetlands REE are significantly released from the soil particles to the water. After winter, higher water dynamics favors the effectiveness and performance of the AMD treatment, since REE contents decrease along the system; La and Ce are preferentially sequestered by ochre sludge but released to the water in the wetlands, influencing the REE pattern of the creek water. Thus, REE fractionation occurs in passive treatment systems and can be used as a tracer to follow up and understand the geochemical processes that promote the remediation of AMD. PMID:26247412

  16. As-built design specification for CAMS Development Dot Data System (CDDDS)

    NASA Technical Reports Server (NTRS)

    Wehmanen, O. A.

    1979-01-01

    The CAMS development dot data system is described. Listings and flow charts of the eight programs used to maintain the data base and the 15 subroutines used in FORTRAN programs to process the data are presented.

  17. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone

    PubMed Central

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-01-01

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of the smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Owing to the AFE design, the whole sensor is very small, measuring only 58 × 50 × 10 mm for wearable monitoring applications, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the built-in kinetic sensors of the smartphone, the proposed system can compute and recognize the user's physical activity and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and identifying the most common abnormal ECG patterns in different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring, without any extra cost for kinetic sensor design, with the help of the widespread smartphone. PMID:25996508
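
    The abstract does not describe its activity-recognition algorithm, so as a rough illustration of the idea, the variance of the accelerometer magnitude over a window can separate rest from motion. The thresholds and labels below are made up; a real system would learn them from data.

```python
import math

def classify_activity(window):
    """Crude activity label from a window of 3-axis accelerometer samples (m/s^2),
    using the variance of the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.5:
        return "rest"       # magnitude stays near gravity
    if var < 20.0:
        return "walking"
    return "running"

label = classify_activity([(0.0, 0.0, 9.81)] * 50)   # a perfectly still phone
```

    The ECG interpreter can then condition its arrhythmia thresholds on the returned label, which is the "context-aware" part of such a design.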

  18. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone.

    PubMed

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-01-01

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of the smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Owing to the AFE design, the whole sensor is very small, measuring only 58 × 50 × 10 mm for wearable monitoring applications, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the built-in kinetic sensors of the smartphone, the proposed system can compute and recognize the user's physical activity and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and identifying the most common abnormal ECG patterns in different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring, without any extra cost for kinetic sensor design, with the help of the widespread smartphone. PMID:25996508

  19. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  20. The Growing Footprint of Climate Change: Can Systems Built Today Cope with Tomorrow's Weather Extremes?

    SciTech Connect

    Kintner-Meyer, Michael CW; Kraucunas, Ian P.

    2013-07-11

    This article describes how current climate conditions--with increasingly extreme storms, droughts, and heat waves and their ensuing effects on water quality and levels--are adding stress to an already aging power grid. Moreover, it explains how evaluations of that grid, built upon past weather patterns, are inadequate for assessing whether the nation's energy systems can cope with future climate changes. The authors make the case for investing in the development of robust, integrated electricity planning tools that account for these climate change factors as a means of enhancing electricity infrastructure resilience.

  1. Humidity compensation of bad-smell sensing system using a detector tube and a built-in camera

    NASA Astrophysics Data System (ADS)

    Hirano, Hiroyuki; Nakamoto, Takamichi

    2011-09-01

    We developed a low-cost sensing system, robust against humidity changes, for detecting and estimating the concentration of bad smells such as hydrogen sulfide and ammonia. In a previous study, we developed an automated measurement system for a gas detector tube using a built-in camera instead of the conventional manual inspection of the tube. The concentration detectable by the developed system ranges from a few tens of ppb to a few tens of ppm. However, we previously found that the estimated concentration depends not only on the actual concentration but also on humidity. Here, we established a method to correct for the influence of humidity by creating a regression function with discoloration rate and humidity as its inputs. We studied two regression methods (backpropagation and a radial basis function network) and evaluated them. Consequently, the system successfully estimated the concentration at a practical level even when humidity changes.
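
    The paper fits its (discoloration rate, humidity) -> concentration mapping with neural networks; as a simpler stand-in showing the same two-input calibration idea, a linear model can be fitted by least squares via the normal equations. The calibration rows and coefficients below are synthetic, not the paper's data.

```python
def fit_linear(rows):
    """Least squares for c ~ w0 + w1*d + w2*h via normal equations (3x3 system)."""
    X = [[1.0, d, h] for d, h, _ in rows]
    y = [c for _, _, c in rows]
    # A = X^T X, b = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * 3
    for i in range(2, -1, -1):      # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, 3))) / A[i][i]
    return w

# synthetic calibration rows: (discoloration_rate, humidity_%, concentration_ppm)
data = [(d, h, 1.0 + 2.0 * d - 0.05 * h) for d in (0.1, 0.4, 0.7) for h in (30, 60, 90)]
w0, w1, w2 = fit_linear(data)
```

    A negative w2, as in this synthetic example, would mean higher humidity depresses the apparent concentration, which is the bias the correction removes; the paper's RBF network plays the same role for non-linear dependence.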

  2. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers

    PubMed Central

    Kühnlenz, Florian; Nardelli, Pedro H. J.

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems whose constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping the resistors it has; the delivered power is in turn a non-linear function that depends on the other agents' behavior, its own internal state, its global state perception, the information received from its neighbors via the communication network and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimal gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation. PMID:26730590
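
    The physical-layer coupling described above can be made concrete with basic circuit algebra: all agents' resistor banks sit in parallel behind a source, so each agent's received power depends on everyone's choices. The component values below are made up; the paper does not specify them.

```python
def delivered_power(counts, V=10.0, R_unit=100.0, R_int=5.0):
    """Power received by each agent's bank of identical parallel resistors,
    all banks wired in parallel behind a source with internal resistance R_int.
    counts[i] is the number of resistors agent i currently keeps."""
    G = [n / R_unit for n in counts]          # conductance of each agent's bank
    R_load = 1.0 / sum(G)                     # combined parallel load
    V_load = V * R_load / (R_load + R_int)    # voltage divider with the source
    return [V_load ** 2 * g for g in G]       # P_i = V_load^2 * G_i

p = delivered_power([1, 1, 2])
```

    Adding a resistor raises an agent's own conductance but lowers the shared load voltage for everyone, which is exactly the non-linear interdependence the agents' decision layer reacts to.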

  3. Teaching Folder Management System for the Enhancement of Engineering and Built Environment Faculty Program

    ERIC Educational Resources Information Center

    Ab-Rahman, Mohammad Syuhaimi; Mustaffa, Muhamad Azrin Mohd; Abdul, Nasrul Amir; Yusoff, Abdul Rahman Mohd; Hipni, Afiq

    2015-01-01

    A strong, systematic and well-executed management system will be able to minimize and coordinate workload. A number of committees, joined by department staff, need to be established to achieve the objectives that have been set. Another important aspect is the monitoring department, in order to ensure that the work done is correct and in…

  4. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers.

    PubMed

    Kühnlenz, Florian; Nardelli, Pedro H J

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems whose constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping the resistors it has; the delivered power is in turn a non-linear function that depends on the other agents' behavior, its own internal state, its global state perception, the information received from its neighbors via the communication network and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimal gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation. PMID:26730590

  5. String Mining in Bioinformatics

    NASA Astrophysics Data System (ADS)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, at the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view was not explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the term “data mining” is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
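
    The intra-molecular case, detecting repeated segments within one sequence, can be sketched with a naive longest-repeated-substring search: binary search on the repeat length, checking each length with a substring set. Real string-mining tools use suffix trees or arrays for linear time; the toy sequence is illustrative.

```python
def longest_repeat(seq):
    """Longest substring of seq that occurs at least twice (naive version)."""
    def has_repeat(k):
        # return some substring of length k seen twice, else None
        seen = set()
        for i in range(len(seq) - k + 1):
            sub = seq[i:i + k]
            if sub in seen:
                return sub
            seen.add(sub)
        return None
    lo, hi, best = 1, len(seq) - 1, ""
    while lo <= hi:                 # binary search on repeat length
        mid = (lo + hi) // 2
        hit = has_repeat(mid)
        if hit:
            best, lo = hit, mid + 1
        else:
            hi = mid - 1
    return best

motif = longest_repeat("GATTACAGATTACA")
```

    The binary search is valid because repeat existence is monotone in the length: any repeat of length k contains repeats of every shorter length.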

  6. String Mining in Bioinformatics

    NASA Astrophysics Data System (ADS)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, at the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view was not explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the term "data mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].

  7. Microbial bioinformatics 2020.

    PubMed

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! PMID:27471065

  8. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    PubMed Central

    Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students’ attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. PMID:25452484

  9. A survey of scholarly literature describing the field of bioinformatics education and bioinformatics educational research.

    PubMed

    Magana, Alejandra J; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students' attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. PMID:25452484

  10. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    PubMed Central

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-01-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, that combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations. PMID:27601088
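
    The kind of elementary supervised learning such a network performs can be sketched in software as a perceptron whose weights, like memristor conductances, are clipped to a physically reachable range. This is an idealized illustration, not the paper's hardware learning rule; the range, learning rate and AND task are made up.

```python
def train_perceptron(samples, epochs=20, lr=0.1, g_min=0.0, g_max=1.0):
    """Perceptron with weights confined to [g_min, g_max], mimicking the bounded
    conductance window of a memristive synapse. samples: (inputs, target in {0,1})."""
    w = [0.5] * len(samples[0][0])   # start all 'conductances' mid-range
    bias = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
            err = t - y
            # weight update, clipped to the allowed conductance window
            w = [min(g_max, max(g_min, wi + lr * err * xi))
                 for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

# learn the AND function on two binary inputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

    The clipping is the interesting constraint: the learning rule must find a separating boundary using only weights the devices can physically hold, which is why tolerance to device asymmetry matters in the hardware version.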

  12. European built core data management system for the Russian Service Module

    NASA Astrophysics Data System (ADS)

    Brandt, G.; Schneider, W.; Urban, G.; Branets, V.; Reimers, C.

    1996-02-01

    Within the frame of European-Russian cooperation in the International Space Station Program, a Fault Tolerant Computer assembly with an associated crew-control station is under development in European industry. The on-board equipment, to be accommodated within the data management and guidance, navigation and control system of the Russian Service Module, is complemented by a complete set of ground hardware and software required for the application software development and verification process, as well as for Service Module system integration and check-out. State-of-the-art technology will be implemented for the on-board fault tolerant computer. The mechanism that provides the required two-failure-tolerant behaviour is based on the so-called Byzantine failure algorithm. The fault detection and recovery algorithm, the data input/output, and the application software are implemented on separate layers, each containing its own processing capability, such that the application program runs independently of any failure calculations and data input/output. The hardware realization is based on state-of-the-art microcircuits, in particular transputer processors, to provide compact equipment size and a fast data exchange mechanism.
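
    The fault-masking idea behind redundant computing lanes can be illustrated, in highly simplified form, by a majority vote over lane outputs. This toy sketch shows only the final value-selection step; a real Byzantine agreement protocol additionally requires n >= 3f + 1 participants and multiple message-exchange rounds to tolerate f arbitrary faults, none of which is modelled here.

```python
from collections import Counter

def vote(lane_outputs):
    """Toy majority vote over the outputs of redundant computing lanes.

    Returns the value produced by a strict majority of lanes; raises if
    the lanes disagree beyond the fault assumption.
    """
    value, n = Counter(lane_outputs).most_common(1)[0]
    if n > len(lane_outputs) // 2:   # strict majority required
        return value
    raise RuntimeError("no majority: lanes disagree beyond fault assumption")

# Four redundant lanes, one of which is faulty
print(vote([42, 42, 7, 42]))   # -> 42
```

    Majority voting like this masks a faulty lane's output, but only the full multi-round agreement protocol guarantees that all correct lanes also agree on *which* inputs they voted over.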

  13. Dynamical energy analysis for built-up acoustic systems at high frequencies.

    PubMed

    Chappell, D J; Giani, S; Tanner, G

    2011-09-01

    Standard methods for describing the intensity distribution of mechanical and acoustic wave fields in the high frequency asymptotic limit are often based on flow transport equations. Common techniques are statistical energy analysis, employed mostly in the context of vibro-acoustics, and ray tracing, a popular tool in architectural acoustics. Dynamical energy analysis makes it possible to interpolate between standard statistical energy analysis and full ray tracing, containing both of these methods as limiting cases. In this work a version of dynamical energy analysis based on a Chebyshev basis expansion of the Perron-Frobenius operator governing the ray dynamics is introduced. It is shown that the technique can efficiently deal with multi-component systems overcoming typical geometrical limitations present in statistical energy analysis. Results are compared with state-of-the-art hp-adaptive discontinuous Galerkin finite element simulations. PMID:21895083
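
    As a rough illustration of the Chebyshev machinery underlying this version of dynamical energy analysis, the sketch below expands a smooth function on [-1, 1] in a Chebyshev basis and evaluates the truncated series with the Clenshaw recurrence. It is a deliberate simplification: the expansion is applied to a plain function rather than to the Perron-Frobenius operator, and the node count is chosen arbitrarily.

```python
import math

def cheb_coeffs(f, n):
    """Coefficients c_k of f(x) ~ c_0/2 + sum_{k>=1} c_k T_k(x),
    computed by Chebyshev-Gauss quadrature at n nodes."""
    thetas = [math.pi * (j + 0.5) / n for j in range(n)]
    return [2.0 / n * sum(f(math.cos(t)) * math.cos(k * t) for t in thetas)
            for k in range(n)]

def cheb_eval(c, x):
    """Clenshaw recurrence for the truncated Chebyshev series."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1
    return x * b1 - b2 + 0.5 * c[0]

c = cheb_coeffs(math.exp, 12)          # 12-term expansion of exp(x)
err = max(abs(cheb_eval(c, x / 50.0) - math.exp(x / 50.0))
          for x in range(-50, 51))
print(f"max abs error with 12 Chebyshev terms: {err:.2e}")
```

    The rapid decay of Chebyshev coefficients for smooth functions is what makes such basis expansions attractive: a handful of terms already reproduces the function to near machine precision, and the same economy motivates using this basis for the transfer operator.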

  14. Development of a purpose built landfill system for the control of methane emissions from municipal solid waste.

    PubMed

    Yedla, Sudhakar; Parikh, Jyoti K

    2002-01-01

    In the present paper, a new system of purpose built landfill (PBLF) is proposed for the control of methane emissions from municipal solid waste (MSW), taking into account all conditions favourable to improved methane generation in tropical climates. Based on theoretical considerations, multivariate functional models (MFMs) are developed to estimate the methane mitigation and energy generation potential of the proposed system. A comparison was made between the existing waste management system and the proposed PBLF system. It was found that the proposed methodology not only controls methane emissions to the atmosphere but could also yield considerable energy in the form of landfill gas (LFG). The economic feasibility of the proposed system was tested by comparing the unit cost of waste disposal in the conventional and PBLF systems. In a case study of MSW management in Mumbai (India), the unit cost of waste disposal with the PBLF system was found to be one-seventh that of the conventional waste management system. The proposed system showed promising energy generation potential, producing methane worth Rs. 244 million/y ($5.2 million/y). Thus, the new waste management methodology could offer an adaptable solution to the conflict between development, environmental degradation, and natural resource depletion. PMID:12092759

  15. An Online Bioinformatics Curriculum

    PubMed Central

    Searls, David B.

    2012-01-01

    Online learning initiatives over the past decade have become increasingly comprehensive in their selection of courses and sophisticated in their presentation, culminating in the recent announcement of a number of consortium and startup activities that promise to make a university education on the internet, free of charge, a real possibility. At this pivotal moment it is appropriate to explore the potential for obtaining comprehensive bioinformatics training with currently existing free video resources. This article presents such a bioinformatics curriculum in the form of a virtual course catalog, together with editorial commentary, and an assessment of strengths, weaknesses, and likely future directions for open online learning in this field. PMID:23028269

  16. Bioinformatics and School Biology

    ERIC Educational Resources Information Center

    Dalpech, Roger

    2006-01-01

    The rapidly changing field of bioinformatics is fuelling the need for suitably trained personnel with skills in relevant biological "sub-disciplines" such as proteomics, transcriptomics and metabolomics, etc. But because of the complexity--and sheer weight of data--associated with these new areas of biology, many school teachers feel…

  17. An Arch-Shaped Intraoral Tongue Drive System with Built-in Tongue-Computer Interfacing SoC

    PubMed Central

    Park, Hangue; Ghovanloo, Maysam

    2014-01-01

    We present a new arch-shaped intraoral Tongue Drive System (iTDS) designed to occupy the buccal shelf in the user's mouth. The new arch-shaped iTDS, which will be referred to as the iTDS-2, incorporates a system-on-a-chip (SoC) that amplifies and digitizes the raw magnetic sensor data and sends it wirelessly to an external TDS universal interface (TDS-UI) via an inductive coil or a planar inverted-F antenna. A built-in transmitter (Tx) employs a dual-band radio that operates at either 27 MHz or 432 MHz band, according to the wireless link quality. A built-in super-regenerative receiver (SR-Rx) monitors the wireless link quality and switches the band if the link quality is below a predetermined threshold. An accompanying ultra-low power FPGA generates data packets for the Tx and handles digital control functions. The custom-designed TDS-UI receives raw magnetic sensor data from the iTDS-2, recognizes the intended user commands by the sensor signal processing (SSP) algorithm running in a smartphone, and delivers the classified commands to the target devices, such as a personal computer or a powered wheelchair. We evaluated the iTDS-2 prototype using center-out and maze navigation tasks on two human subjects, which proved its functionality. The subjects' performance with the iTDS-2 was improved by 22% over its predecessor, reported in our earlier publication. PMID:25405513
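
    The band-switching behaviour described above can be sketched as a simple threshold policy: stay on the current band while the link quality is acceptable, and hop to the other band when it drops below a threshold. The threshold value and the normalized quality scale below are invented for illustration and are not taken from the iTDS-2 design.

```python
# Hypothetical sketch of a dual-band switching policy. Values are
# illustrative assumptions, not iTDS-2 parameters.
BANDS_MHZ = (27, 432)
THRESHOLD = 0.5            # assumed normalized link-quality threshold

def next_band(current_mhz, link_quality):
    """Switch to the other band when link quality falls below threshold."""
    if link_quality < THRESHOLD:
        return BANDS_MHZ[1] if current_mhz == BANDS_MHZ[0] else BANDS_MHZ[0]
    return current_mhz

band = 27
for q in (0.9, 0.8, 0.3, 0.7, 0.2):   # simulated link-quality readings
    band = next_band(band, q)
    print(band)                        # prints 27, 27, 432, 432, 27
```

    A real receiver-driven policy would typically also add hysteresis so that a link quality hovering near the threshold does not cause the radio to oscillate between bands.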

  18. In the Spotlight: Bioinformatics

    PubMed Central

    Wang, May Dongmei

    2016-01-01

    During 2012, next generation sequencing (NGS) attracted great attention in the biomedical research community, especially for personalized medicine, and third generation sequencing also became available. Therefore, state-of-the-art sequencing technology and analysis are reviewed in this Bioinformatics spotlight on 2012. Next-generation sequencing (NGS) is a high-throughput nucleic acid sequencing technology with wide dynamic range and single-base resolution. The full promise of NGS depends on the optimization of NGS platforms, sequence alignment and assembly algorithms, data analytics, novel algorithms for integrating NGS data with existing genomic, proteomic, or metabolomic data, and quantitative assessment of NGS technology in comparison with more established technologies such as microarrays. NGS technology has been predicted to become a cornerstone of personalized medicine. It is argued that NGS is a promising field for motivated young researchers looking for opportunities in bioinformatics. PMID:23192635

  19. Phylogenetic trees in bioinformatics

    SciTech Connect

    Burr, Tom L

    2008-01-01

    Genetic data are often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for even a modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data is computationally demanding. Bioinformatics is too large a field to review here; we focus on the aspect of bioinformatics that studies similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. This paper therefore reviews the role of phylogenetic tree estimation in bioinformatics and the available methods and software, and identifies areas for additional research and development.
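
    The combinatorial explosion mentioned above is easy to quantify: the number of distinct unrooted binary tree topologies for n taxa is the double factorial (2n-5)!!, which already exceeds two million at n = 10. A short sketch:

```python
def n_unrooted_trees(n):
    """Number of distinct unrooted binary tree topologies for n taxa,
    given by the double factorial (2n-5)!! = 1 * 3 * 5 * ... * (2n-5)."""
    if n < 3:
        raise ValueError("need at least 3 taxa")
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (4, 10, 20):
    print(n, n_unrooted_trees(n))
# 4 taxa -> 3 trees; 10 taxa -> 2,027,025 trees; 20 taxa is already ~10^20
```

    This growth is why practical tree search relies on heuristics (neighbor joining, hill climbing over topologies, Bayesian sampling) rather than exhaustive enumeration.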

  20. [An overview of feature selection algorithm in bioinformatics].

    PubMed

    Li, Xin; Ma, Li; Wang, Jinjia; Zhao, Chun

    2011-04-01

    Feature selection (FS) techniques have become an important tool in the bioinformatics field. Their core task is to select the significant, low-dimensional structure hidden in a high-dimensional data space, and thereby to reveal the basic built-in rules of the data. Because bioinformatics data typically have high dimensionality and small sample sizes, research on FS algorithms in this field holds great promise. In this article, we make the interested reader aware of the possibilities of feature selection, provide basic properties of feature selection techniques, and discuss their uses in sequence analysis, microarray analysis, mass spectra analysis, etc. Finally, the current problems and prospects of feature selection algorithms in bioinformatics applications are also discussed. PMID:21604512
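
    A minimal filter-style feature selection sketch: rank each feature by the absolute Pearson correlation with the class label and keep the top k. The tiny dataset below is invented for illustration (6 samples, 4 features, with feature 0 carrying the class signal), mimicking the high-dimension, small-sample setting described above on a toy scale.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_top_k(X, y, k):
    """X: list of samples (rows) over features (columns); returns indices
    of the k features ranked highest by |correlation with y|."""
    scores = [abs(pearson([row[j] for row in X], y))
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

y = [0, 0, 0, 1, 1, 1]                     # class labels
X = [[0.1,  1.0, 0.5, -1.0],               # feature 0 tracks the label,
     [-0.2, -1.0, 0.4,  0.0],              # features 1-3 are uninformative
     [0.0,  1.0, 0.6,  1.0],
     [2.1, -1.0, 0.5, -1.0],
     [1.9,  1.0, 0.4,  0.0],
     [2.2, -1.0, 0.6,  1.0]]
print(select_top_k(X, y, 2))               # feature 0 ranks first
```

    Filter methods like this are fast and classifier-agnostic but score features one at a time; wrapper and embedded methods, also covered in the survey, instead evaluate feature subsets jointly.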

  1. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children

    PubMed Central

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Aslı; Sancar, Burcu

    2013-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children’s gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language. PMID:25663828

  2. The potential of translational bioinformatics approaches for pharmacology research.

    PubMed

    Li, Lang

    2015-10-01

    The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of 'omics' in biomedical research. Its potential impact on pharmacology research is enormous, and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of systems pharmacology models using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. PMID:25753093

  3. Bioinformatics-Driven New Immune Target Discovery in Disease.

    PubMed

    Yang, C; Chen, P; Zhang, W; Du, H

    2016-08-01

    Biomolecular network analysis has been widely applied in the discovery of cancer driver genes and molecular mechanism anatomization of many diseases on the genetic level. However, the application of such approach in the potential antigen discovery of autoimmune diseases remains largely unexplored. Here, we describe a previously uncharacterized region, with disease-associated autoantigens, to build antigen networks with three bioinformatics tools, namely NetworkAnalyst, GeneMANIA and ToppGene. First, we identified histone H2AX as an antigen of systemic lupus erythematosus by comparing highly ranked genes from all the built network-derived gene lists, and then a new potential biomarker for Behcet's disease, heat shock protein HSP 90-alpha (HSP90AA1), was further screened out. Moreover, 130 confirmed patients were enrolled and a corresponding enzyme-linked immunosorbent assay, mass spectrum analysis and immunoprecipitation were performed to further confirm the bioinformatics results with real-world clinical samples in succession. Our findings demonstrate that the combination of multiple molecular network approaches is a promising tool to discover new immune targets in diseases. PMID:27226232

  4. Making sense of genomes of parasitic worms: Tackling bioinformatic challenges.

    PubMed

    Korhonen, Pasi K; Young, Neil D; Gasser, Robin B

    2016-01-01

    Billions of people and animals are infected with parasitic worms (helminths). Many of these worms cause diseases that have a major socioeconomic impact worldwide, and are challenging to control because existing treatment methods are often inadequate. There is, therefore, a need to work toward developing new intervention methods, built on a sound understanding of parasitic worms at molecular level, the relationships that they have with their animal hosts and/or the diseases that they cause. Decoding the genomes and transcriptomes of these parasites brings us a step closer to this goal. The key focus of this article is to critically review and discuss bioinformatic tools used for the assembly and annotation of these genomes and transcriptomes, as well as various post-genomic analyses of transcription profiles, biological pathways, synteny, phylogeny, biogeography and the prediction and prioritisation of drug target candidates. Bioinformatic pipelines implemented and established recently provide practical and efficient tools for the assembly and annotation of genomes of parasitic worms, and will be applicable to a wide range of other parasites and eukaryotic organisms. Future research will need to assess the utility of long-read sequence data sets for enhanced genomic assemblies, and develop improved algorithms for gene prediction and post-genomic analyses, to enable comprehensive systems biology explorations of parasitic organisms. PMID:26956711

  5. Efficient azo dye decolorization in a continuous stirred tank reactor (CSTR) with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Gao, Lei; Cheng, Hao-Yi; Wang, Ai-Jie

    2016-10-01

    A continuous stirred tank reactor with a built-in bioelectrochemical system (CSTR-BES) was developed for treating wastewater containing the azo dye Alizarin Yellow R (AYR). The decolorization efficiency (DE) of the CSTR-BES was 97.04±0.06% over 7 h with a sludge concentration of 3000 mg/L and an initial AYR concentration of 100 mg/L, superior to both the sole CSTR mode (open circuit: 54.87±4.34%) and the sole BES mode (without sludge addition: 91.37±0.44%). The effects of sludge concentration and sodium acetate (NaAc) concentration on azo dye decolorization were investigated. The highest DE of the CSTR-BES over 4 h was 87.66±2.93%, with a sludge concentration of 12,000 mg/L, an NaAc concentration of 2000 mg/L and an initial AYR concentration of 100 mg/L. The results of this study indicate that the CSTR-BES could be a practical strategy for upgrading conventional anaerobic facilities for refractory wastewater treatment. PMID:27497830

  6. Computer Simulation of Embryonic Systems: What can a virtual embryo teach us about developmental toxicity? (LA Conference on Computational Biology & Bioinformatics)

    EPA Science Inventory

    This presentation will cover work at EPA under the CSS program for: (1) Virtual Tissue Models built from the known biology of an embryological system and structured to recapitulate key cell signals and responses; (2) running the models with real (in vitro) or synthetic (in silico...

  7. Bioinformatics of prokaryotic RNAs

    PubMed Central

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genome of most prokaryotes gives rise to surprisingly complex transcriptomes, comprising not only protein-coding mRNAs, often organized as operons, but also harbors dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly large levels of anti-sense transcripts. Comprehensive surveys of prokaryotic transcriptomes and the need to characterize also their non-coding components is heavily dependent on computational methods and workflows, many of which have been developed or at least adapted specifically for the use with bacterial and archaeal data. This review provides an overview on the state-of-the-art of RNA bioinformatics focusing on applications to prokaryotes. PMID:24755880

  9. Temporal Patterns in Sheep Fetal Heart Rate Variability Correlate to Systemic Cytokine Inflammatory Response: A Methodological Exploration of Monitoring Potential Using Complex Signals Bioinformatics.

    PubMed

    Herry, Christophe L; Cortes, Marina; Wu, Hau-Tieng; Durosier, Lucien D; Cao, Mingju; Burns, Patrick; Desrochers, André; Fecteau, Gilles; Seely, Andrew J E; Frasch, Martin G

    2016-01-01

    Fetal inflammation is associated with increased risk for postnatal organ injuries. No means of early detection exist. We hypothesized that systemic fetal inflammation leads to distinct alterations of fetal heart rate variability (fHRV). We tested this hypothesis by deploying a novel series of approaches from complex signals bioinformatics. In chronically instrumented near-term fetal sheep, we induced an inflammatory response with lipopolysaccharide (LPS) injected intravenously (n = 10), observing it over 54 hours; seven additional fetuses served as controls. Fifty-one fHRV measures were determined continuously every 5 minutes using Continuous Individualized Multi-organ Variability Analysis (CIMVA). CIMVA creates an fHRV measures matrix across five signal-analytical domains, thus describing complementary properties of fHRV. We implemented, validated and tested methodology to obtain the subset of CIMVA fHRV measures that best matched the temporal profile of the inflammatory cytokine IL-6. In the LPS group, IL-6 peaked at 3 hours. For the LPS group, but not the control group, a sharp increase in the standardized difference in variability with respect to baseline was observed between 3 h and 6 h, abating to baseline levels and thus closely tracking the IL-6 inflammatory profile. We derived an fHRV inflammatory index (FII) consisting of 15 fHRV measures reflecting the fetal inflammatory response, with a prediction accuracy of 90%. Hierarchical clustering validated the selection of 14 of the 15 fHRV measures comprising the FII. We developed methodology to identify a distinctive subset of fHRV measures that tracks inflammation over time. The broader potential of this bioinformatics approach to detect physiological responses encoded in HRV measures is discussed. PMID:27100089

  10. Molecular characterization and bioinformatics analysis of Ncoa7B, a novel ovulation-associated and reproduction system-specific Ncoa7 isoform.

    PubMed

    Shkolnik, Ketty; Ben-Dor, Shifra; Galiani, Dalia; Hourvitz, Ariel; Dekel, Nava

    2008-03-01

    In the present work, we employed bioinformatics search tools to select ovulation-associated cDNA clones with a preference for those representing putative novel genes. Detailed characterization of one of these transcripts, 6C3, by real-time PCR and RACE analyses led to identification of a novel ovulation-associated gene, designated Ncoa7B. This gene was found to exhibit a significant homology to the Ncoa7 gene that encodes a conserved tissue-specific nuclear receptor coactivator. Unlike Ncoa7, Ncoa7B possesses a unique and highly conserved exon at the 5' end and encodes a protein with a unique N-terminal sequence. Extensive bioinformatics analysis has revealed that Ncoa7B has one identifiable domain, TLDc, which has recently been suggested to be involved in protection from oxidative DNA damage. An alignment of TLDc domain containing proteins was performed, and the closest relative identified was OXR1, which also has a corresponding, highly related short isoform, with just a TLDc domain. Moreover, Ncoa7B expression, as seen to date, seems to be restricted to mammals, while other TLDc family members have no such restriction. Multiple tissue analysis revealed that unlike Ncoa7, which was abundant in a variety of tissues with the highest expression in the brain, Ncoa7B mRNA expression is restricted to the reproductive system organs, particularly the uterus and the ovary. The ovarian expression of Ncoa7B was stimulated by human chorionic gonadotropin. Additionally, using real-time PCR, we demonstrated the involvement of multiple signaling pathways for Ncoa7B expression on preovulatory follicles. PMID:18299425

  12. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  13. The influence of the built environment on outcomes from a "walking school bus study": a cross-sectional analysis using geographical information systems.

    PubMed

    Oreskovic, Nicolas M; Blossom, Jeff; Robinson, Alyssa I; Chen, Minghua L; Uscanga, Doris K; Mendoza, Jason A

    2014-11-01

    Active commuting to school increases children's daily physical activity. The built environment is associated with children's physical activity levels in cross-sectional studies. This study examined the role of the built environment in the outcomes of a "walking school bus" study. Geographical information systems were used to map and compare the built environments around schools participating in a pilot walking school bus randomised controlled trial, as well as along school routes. Multi-level modelling was used to determine the built environment attributes associated with the outcomes of active commuting to school and accelerometer-determined moderate-to-vigorous physical activity (MVPA). There were no differences in the surrounding built environments of the control (n = 4) and intervention (n = 4) schools participating in the walking school bus study. Among school walking routes, park space was inversely associated with active commuting to school (β = -0.008, SE = 0.004, P = 0.03), while mixed land use was positively associated with daily MVPA (β = 60.0, SE = 24.3, P = 0.02). There was effect modification such that high traffic volume and high street connectivity were associated with greater moderate-to-vigorous physical activity. The results of this study suggest that the built environment may play a role in active school commuting outcomes and daily physical activity. PMID:25545924

  15. Bioinformatics-Aided Venomics

    PubMed Central

    Kaas, Quentin; Craik, David J.

    2015-01-01

    Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future. PMID:26110505

  16. Comprehensive Decision Tree Models in Bioinformatics

    PubMed Central

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Purpose Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. Methods This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class
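The size-constrained tuning idea described above (growing a tree bounded only by its structural dimensions, with no performance measure consulted during tuning) can be sketched with a minimal pure-Python CART-style learner. This is an illustrative toy under a simple Gini criterion, not the paper's visual-tuning software; all function names are hypothetical.

```python
# Minimal decision tree whose growth is limited only by a structural
# bound (max_depth), never by a held-out performance measure.

def gini(labels):
    """Gini impurity of a non-empty list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Exhaustive search over (feature, threshold) minimising weighted Gini."""
    best = None  # (score, feature, threshold)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def grow(rows, labels, max_depth):
    """Recursively grow a tree; leaves are majority labels."""
    majority = max(set(labels), key=labels.count)
    if max_depth == 0 or len(set(labels)) == 1:
        return majority
    split = best_split(rows, labels)
    if split is None:
        return majority
    _, f, t = split
    li = [i for i, r in enumerate(rows) if r[f] <= t]
    ri = [i for i, r in enumerate(rows) if r[f] > t]
    return (f, t,
            grow([rows[i] for i in li], [labels[i] for i in li], max_depth - 1),
            grow([rows[i] for i in ri], [labels[i] for i in ri], max_depth - 1))

def predict(tree, row):
    while isinstance(tree, tuple):
        f, t, left, right = tree
        tree = left if row[f] <= t else right
    return tree
```

A depth bound of 2 suffices for an XOR-style toy dataset; in the paper's setting the bound would come from visual boundaries rather than from accuracy.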

  17. Virtual Bioinformatics Distance Learning Suite

    ERIC Educational Resources Information Center

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  18. Channelrhodopsins: a bioinformatics perspective.

    PubMed

    Del Val, Coral; Royuela-Flor, José; Milenkovic, Stefan; Bondar, Ana-Nicoleta

    2014-05-01

    Channelrhodopsins are microbial-type rhodopsins that function as light-gated cation channels. Understanding how the detailed architecture of the protein governs its dynamics and specificity for ions is important, because it has the potential to assist in designing site-directed channelrhodopsin mutants for specific neurobiology applications. Here we use bioinformatics methods to derive accurate alignments of channelrhodopsin sequences, assess the sequence conservation patterns and find conserved motifs in channelrhodopsins, and use homology modeling to construct three-dimensional structural models of channelrhodopsins. The analyses reveal that helices C and D of channelrhodopsins contain Cys, Ser, and Thr groups that can engage in both intra- and inter-helical hydrogen bonds. We propose that these polar groups participate in inter-helical hydrogen-bonding clusters important for the protein conformational dynamics and for the local water interactions. This article is part of a Special Issue entitled: Retinal Proteins - You can teach an old dog new tricks. PMID:24252597
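A minimal sketch of the conservation analysis described above: score each alignment column by the frequency of its most common residue and flag conserved Ser/Thr/Cys positions capable of hydrogen bonding. The toy alignment and the 0.75 threshold are invented for illustration, not channelrhodopsin data.

```python
from collections import Counter

# Side chains that can donate/accept hydrogen bonds (illustrative set).
POLAR_SC = set("STC")

def column_conservation(alignment):
    """Fraction of the most common residue in each column (gaps count)."""
    cols = zip(*alignment)
    return [Counter(col).most_common(1)[0][1] / len(alignment) for col in cols]

def conserved_polar_columns(alignment, min_fraction=0.75):
    """Return (index, residue, score) for conserved S/T/C columns."""
    scores = column_conservation(alignment)
    hits = []
    for i, score in enumerate(scores):
        top = Counter(a[i] for a in alignment).most_common(1)[0][0]
        if score >= min_fraction and top in POLAR_SC:
            hits.append((i, top, score))
    return hits
```

On a real alignment, the flagged columns would be candidates for the inter-helical hydrogen-bonding clusters the authors propose.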

  19. Bioinformatics and Moonlighting Proteins

    PubMed Central

    Hernández, Sergio; Franco, Luís; Calvo, Alejandra; Ferragut, Gabriela; Hermoso, Antoni; Amela, Isaac; Gómez, Antonio; Querol, Enrique; Cedano, Juan

    2015-01-01

    Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyze and describe several approaches that use sequences, structures, interactomics, and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein–protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (e.g., algorithms such as PISITE), and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations – it requires the existence of multialigned family protein sequences – but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses. PMID:26157797
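The combination the authors find most effective (remote homology hits plus interactomics data) can be sketched as a simple evidence-merging rule: flag a protein whose homologs and interaction partners together span more than one functional class. All proteins, partners, and annotations below are invented placeholders, and the heuristic itself is a simplification of the workflow described in the abstract.

```python
# Hedged sketch: a protein is a moonlighting candidate when its
# remote-homology hits and its PPI partners together cover two or
# more distinct functional classes.

def candidate_moonlighters(homology_hits, ppi_partners, annotations):
    """homology_hits/ppi_partners: protein -> iterable of related proteins.
    annotations: protein -> functional class. Returns {protein: classes}."""
    flagged = {}
    for protein in set(homology_hits) | set(ppi_partners):
        related = (set(homology_hits.get(protein, ()))
                   | set(ppi_partners.get(protein, ())))
        classes = {annotations[r] for r in related if r in annotations}
        if len(classes) >= 2:
            flagged[protein] = classes
    return flagged
```

Here P1's homolog suggests one function and its interaction partner another, so only P1 is flagged.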

  20. Bioinformatics and Moonlighting Proteins.

    PubMed

    Hernández, Sergio; Franco, Luís; Calvo, Alejandra; Ferragut, Gabriela; Hermoso, Antoni; Amela, Isaac; Gómez, Antonio; Querol, Enrique; Cedano, Juan

    2015-01-01

    Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyze and describe several approaches that use sequences, structures, interactomics, and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (e.g., algorithms such as PISITE), and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations - it requires the existence of multialigned family protein sequences - but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses. PMID:26157797

  1. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science", "Standards and Interoperability", "Open Science and Reproducibility", "Translational Bioinformatics", "Visualization", and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community", that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  2. ncRDeathDB: A comprehensive bioinformatics resource for deciphering network organization of the ncRNA-mediated cell death system.

    PubMed

    Wu, Deng; Huang, Yan; Kang, Juanjuan; Li, Kongning; Bi, Xiaoman; Zhang, Ting; Jin, Nana; Hu, Yongfei; Tan, Puwen; Zhang, Lu; Yi, Ying; Shen, Wenjun; Huang, Jian; Li, Xiaobo; Li, Xia; Xu, Jianzhen; Wang, Dong

    2015-01-01

    Programmed cell death (PCD) is a critical biological process with roles in many other important processes, and defects in PCD have been linked to numerous human diseases. In recent years, the protein architecture of different PCD subroutines has been explored, but our understanding of the global network organization of the noncoding RNA (ncRNA)-mediated cell death system is limited and ambiguous. Hence, we developed a comprehensive bioinformatics resource (ncRDeathDB, www.rna-society.org/ncrdeathdb ) to archive ncRNA-associated cell death interactions. The current version of ncRDeathDB documents a total of more than 4600 ncRNA-mediated PCD entries in 12 species. ncRDeathDB provides a user-friendly interface to query, browse and manipulate these ncRNA-associated cell death interactions. Furthermore, this resource will help to visualize and navigate current knowledge of the noncoding RNA component of cell death and autophagy, to uncover the generic organizing principles of ncRNA-associated cell death systems, and to generate valuable biological hypotheses. PMID:26431463

  3. Combining chemoinformatics with bioinformatics: in silico prediction of bacterial flavor-forming pathways by a chemical systems biology approach "reverse pathway engineering".

    PubMed

    Liu, Mengjin; Bienfait, Bruno; Sacher, Oliver; Gasteiger, Johann; Siezen, Roland J; Nauta, Arjen; Geurts, Jan M W

    2014-01-01

    The incompleteness of genome-scale metabolic models is a major bottleneck for systems biology approaches, which are based on large numbers of metabolites as identified and quantified by metabolomics. Many of the revealed secondary metabolites and/or their derivatives, such as flavor compounds, are non-essential in metabolism, and many of their synthesis pathways are unknown. In this study, we describe a novel approach, Reverse Pathway Engineering (RPE), which combines chemoinformatics and bioinformatics analyses to predict the "missing links" between compounds of interest and their possible metabolic precursors by providing plausible chemical and/or enzymatic reactions. We demonstrate the added value of the approach by using flavor-forming pathways in lactic acid bacteria (LAB) as an example. Established metabolic routes leading to the formation of flavor compounds from leucine were successfully replicated. Novel reactions involved in flavor formation, i.e., the conversion of alpha-hydroxy-isocaproate to 3-methylbutanoic acid and the synthesis of dimethyl sulfide, as well as the enzymes involved, were successfully predicted. These new insights into the flavor-formation mechanisms in LAB can have a significant impact on improving the control of aroma formation in fermented food products. Since the input reaction databases and compounds are highly flexible, the RPE approach can be easily extended to a broad spectrum of applications, including health/disease biomarker discovery and synthetic biology. PMID:24416282
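The backward-walking idea behind RPE can be sketched as recursive application of reversed reaction rules until known precursors are reached. The rule table below compresses the leucine route mentioned in the abstract into two illustrative steps; the reaction labels and function names are assumptions, not the authors' actual chemoinformatics rule base.

```python
# Toy "reverse pathway engineering": walk backwards from a target
# compound by applying reversed reaction rules until a known
# precursor is reached.

def precursor_routes(target, reverse_rules, known, max_depth=3):
    """reverse_rules: product -> list of (precursor, reaction) pairs.
    Returns plausible routes as lists of (compound, reaction) steps."""
    routes = []

    def walk(compound, path, depth):
        if compound in known:
            routes.append(list(path))
            return
        if depth == 0:
            return
        for precursor, reaction in reverse_rules.get(compound, ()):
            walk(precursor, path + [(precursor, reaction)], depth - 1)

    walk(target, [], max_depth)
    return routes
```

With an invented two-rule table linking 3-methylbutanoic acid back to leucine via alpha-hydroxy-isocaproate, the search recovers a single route.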

  4. Combining Chemoinformatics with Bioinformatics: In Silico Prediction of Bacterial Flavor-Forming Pathways by a Chemical Systems Biology Approach “Reverse Pathway Engineering”

    PubMed Central

    Liu, Mengjin; Bienfait, Bruno; Sacher, Oliver; Gasteiger, Johann; Siezen, Roland J.; Nauta, Arjen; Geurts, Jan M. W.

    2014-01-01

    The incompleteness of genome-scale metabolic models is a major bottleneck for systems biology approaches, which are based on large numbers of metabolites as identified and quantified by metabolomics. Many of the revealed secondary metabolites and/or their derivatives, such as flavor compounds, are non-essential in metabolism, and many of their synthesis pathways are unknown. In this study, we describe a novel approach, Reverse Pathway Engineering (RPE), which combines chemoinformatics and bioinformatics analyses to predict the “missing links” between compounds of interest and their possible metabolic precursors by providing plausible chemical and/or enzymatic reactions. We demonstrate the added value of the approach by using flavor-forming pathways in lactic acid bacteria (LAB) as an example. Established metabolic routes leading to the formation of flavor compounds from leucine were successfully replicated. Novel reactions involved in flavor formation, i.e., the conversion of alpha-hydroxy-isocaproate to 3-methylbutanoic acid and the synthesis of dimethyl sulfide, as well as the enzymes involved, were successfully predicted. These new insights into the flavor-formation mechanisms in LAB can have a significant impact on improving the control of aroma formation in fermented food products. Since the input reaction databases and compounds are highly flexible, the RPE approach can be easily extended to a broad spectrum of applications, including health/disease biomarker discovery and synthetic biology. PMID:24416282

  5. Translational bioinformatics in psychoneuroimmunology: methods and applications.

    PubMed

    Yan, Qing

    2012-01-01

    Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes, including behavior-based profiles, will contribute to the transition from disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of the neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors is needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talk among pathways in various brain regions involved in disorders such as Alzheimer's disease. PMID:22933157

  6. MEMOSys: Bioinformatics platform for genome-scale metabolic models

    PubMed Central

    2011-01-01

    Background Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. Results MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. Conclusions We have developed a web-based system designed to provide researchers a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys. PMID:21276275

  7. Bioinformatics in protein analysis.

    PubMed

    Persson, B

    2000-01-01

    The chapter gives an overview of bioinformatic techniques of importance in protein analysis. These include database searches, sequence comparisons and structural predictions. Links to useful World Wide Web (WWW) pages are given in relation to each topic. Databases with biological information are reviewed with emphasis on databases for nucleotide sequences (EMBL, GenBank, DDBJ), genomes, amino acid sequences (Swissprot, PIR, TrEMBL, GenePept), and three-dimensional structures (PDB). Integrated user interfaces for databases (SRS and Entrez) are described. An introduction to databases of sequence patterns and protein families is also given (Prosite, Pfam, Blocks). Furthermore, the chapter describes the widespread methods for sequence comparisons, FASTA and BLAST, and the corresponding WWW services. The techniques involving multiple sequence alignments are also reviewed: alignment creation with the Clustal programs, phylogenetic tree calculation with the Clustal or Phylip packages and tree display using Drawtree, njplot or phylo_win. Finally, the chapter also treats the issue of structural prediction. Different methods for secondary structure prediction are described (Chou-Fasman, Garnier-Osguthorpe-Robson, Predator, PHD). Techniques for predicting membrane proteins, antigenic sites and post-translational modifications are also reviewed. PMID:10803381
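The pairwise comparison methods surveyed here (FASTA, BLAST) are fast heuristics, but the underlying idea of scored alignment can be shown with a compact Needleman-Wunsch global aligner. The scoring scheme (match +1, mismatch -1, gap -1) is a simplified assumption, not the scoring actually used by those tools.

```python
# Compact Needleman-Wunsch global alignment score via dynamic
# programming; returns only the optimal score, not the alignment.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning prefixes a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]
```

Aligning `"ACGT"` with `"ACG"` scores three matches and one terminal gap under these parameters.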

  8. Global computing for bioinformatics.

    PubMed

    Loewe, Laurence

    2002-12-01

    Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or, if employed in an enterprise-wide intranet for security reasons, idle desktop PCs can substitute for an expensive dedicated cluster. PMID:12511066

  9. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    PubMed Central

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  10. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    PubMed

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  11. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea

    PubMed Central

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-01-01

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system. PMID:26690174
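Phase (3), the accuracy assessment against total-station reference points, amounts to computing a positional error statistic over paired 3D points; a minimal RMSE sketch follows. The coordinates are invented and the function name is hypothetical; the paper's actual point sets and tolerances are not reproduced here.

```python
import math

def rmse_3d(model_pts, reference_pts):
    """Root-mean-square 3D error between paired (x, y, z) point lists,
    e.g. as-built BIM points vs. total-station reference points."""
    assert len(model_pts) == len(reference_pts), "points must be paired"
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(model_pts, reference_pts)]
    return math.sqrt(sum(sq) / len(sq))
```

A single point offset by a 3-4-5 triangle in the horizontal plane gives an RMSE of exactly 5 units, which makes the computation easy to sanity-check.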

  12. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea.

    PubMed

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-01-01

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system. PMID:26690174

  13. Survey of MapReduce frame operation in bioinformatics.

    PubMed

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. PMID:23396756
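The map/shuffle/reduce pattern that Hadoop parallelises can be illustrated with single-process k-mer counting, a common sequencing task. The reads and k value are invented examples; a real Hadoop job would distribute each phase across cluster nodes rather than run them in one process.

```python
from collections import defaultdict

def map_phase(read, k=3):
    """Map: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped counts per k-mer."""
    return {kmer: sum(counts) for kmer, counts in groups.items()}

def kmer_counts(reads, k=3):
    mapped = [pair for read in reads for pair in map_phase(read, k)]
    return reduce_phase(shuffle(mapped))
```

Because each phase only sees independent (key, value) pairs, the same three functions can be scattered across machines without changing the result, which is the property MapReduce exploits.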

  14. VLSI Microsystem for Rapid Bioinformatic Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Lue, Jaw-Chyng

    2009-01-01

    A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).

  15. The Genome Sequencer FLX System--longer reads, more applications, straight forward bioinformatics and more complete data sets.

    PubMed

    Droege, Marcus; Hill, Brendon

    2008-08-31

    The Genome Sequencer FLX System (GS FLX), powered by 454 Sequencing, is a next-generation DNA sequencing technology featuring a unique mix of long reads, exceptional accuracy, and ultra-high throughput. It has been proven to be the most versatile of all currently available next-generation sequencing technologies, supporting many high-profile studies in over seven application categories. GS FLX users have pursued innovative research in de novo sequencing, re-sequencing of whole genomes and target DNA regions, metagenomics, and RNA analysis. 454 Sequencing is a powerful tool for human genetics research: it has recently re-sequenced the genome of an individual human, is currently being used to re-sequence the complete human exome and targeted genomic regions with the NimbleGen sequence capture process, and has detected low-frequency somatic mutations linked to cancer. PMID:18616967

  16. Autophagy Regulatory Network — A systems-level bioinformatics resource for studying the mechanism and regulation of autophagy

    PubMed Central

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known; however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to build a high-coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without a computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases. PMID:25635527
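Querying a regulatory-interaction table like ARN's reduces to filtering (regulator, type, target) records; a minimal sketch, assuming an invented record layout, follows. The interactions listed are placeholders chosen for illustration, not actual ARN content.

```python
# Toy regulatory-interaction table: (regulator, regulator_type, target).
# All records are invented placeholders.
records = [
    ("miR-30a", "miRNA", "BECN1"),
    ("TFEB", "TF", "MAP1LC3B"),
    ("miR-376b", "miRNA", "BECN1"),
    ("TP53", "TF", "BECN1"),
]

def regulators_of(records, target, kind=None):
    """List regulators of one target, optionally filtered by type."""
    return sorted(reg for reg, rtype, tgt in records
                  if tgt == target and (kind is None or rtype == kind))
```

Filtering by type lets a user pull, say, only the miRNAs acting on one autophagy protein, mirroring the browse/query interface the abstract describes.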

  17. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  18. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  19. Chapter 16: text mining for translational bioinformatics.

    PubMed

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing. PMID:23633944
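    The rule-based (knowledge-based) approach described above, and the ambiguity it must confront at every linguistic level, can be illustrated with a toy entity matcher. Everything here is invented for illustration: a tiny gene dictionary gives high-precision hits, while a symbol-like pattern over-generates ambiguous candidates that, in a hybrid system, a machine-learning component would disambiguate.

    ```python
    import re

    # Toy rule-based gene-mention matcher (illustrative names only).
    KNOWN_GENES = {"BRCA1", "TP53", "EGFR"}
    SYMBOL_LIKE = re.compile(r"\b[A-Z][A-Z0-9]{2,5}\b")

    def tag_genes(sentence):
        hits = []
        for m in SYMBOL_LIKE.finditer(sentence):
            token = m.group()
            # Dictionary hits are high-precision; other pattern matches are
            # ambiguous candidates left for statistical disambiguation.
            hits.append((token, token in KNOWN_GENES))
        return hits

    print(tag_genes("Mutations in BRCA1 and WASF3 alter risk."))
    # → [('BRCA1', True), ('WASF3', False)]
    ```

    The False-flagged candidate shows why purely rule-based systems trade recall against precision, motivating the hybrid architectures the chapter describes.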

  20. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

    Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of a rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamical development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user’s Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315
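    The core idea of a pipeline formality layered over ordinary code can be sketched in plain Python. This is NOT Leaf's actual syntax; it only illustrates declaring steps with their dependencies and executing them in a dependency-consistent order, which is the structuring Leaf formalizes.

    ```python
    # Not Leaf syntax: a generic dependency-ordered pipeline sketch.
    from graphlib import TopologicalSorter

    # Each step maps to the steps it depends on (hypothetical step names).
    steps = {
        "load": [],
        "filter": ["load"],
        "align": ["filter"],
        "report": ["align", "filter"],
    }

    def run(name):
        # Placeholder for the real analysis step.
        return f"ran {name}"

    # TopologicalSorter guarantees every dependency runs before its dependents.
    order = list(TopologicalSorter(steps).static_order())
    results = {name: run(name) for name in order}
    print(order)  # → ['load', 'filter', 'align', 'report']
    ```

    A system like Leaf adds, on top of this ordering, persistence, consistency checks between steps, and publication of the protocol; the sketch shows only the dependency skeleton.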

  1. Chapter 16: Text Mining for Translational Bioinformatics

    PubMed Central

    Cohen, K. Bretonnel; Hunter, Lawrence E.

    2013-01-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research—translating basic science results into new interventions—and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing. PMID:23633944

  2. Generations of interdisciplinarity in bioinformatics

    PubMed Central

    Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L.

    2016-01-01

    Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life sciences, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this “borderland.” As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are also distinctions between the forerunners, founders and followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature. PMID:27453689

  3. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, James L.

    1995-01-01

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition.

  4. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, J.L.

    1995-04-11

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition. 6 figures.

  5. No moving parts safe and arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    SciTech Connect

    Hendrix, J.L.

    1994-12-31

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe and arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe and arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel.
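    The firing sequence the three patent records describe (arm opens the optical window, a fire command selects an event channel, the channel is verified, and only then is the ordnance fired) can be sketched as a small state machine. All class and method names below are invented for illustration; the patent's actual controller is hardware, not software.

    ```python
    from enum import Enum, auto

    class State(Enum):
        SAFE = auto()
        ARMED = auto()
        CHANNEL_SELECTED = auto()
        FIRED = auto()

    class SafeArmController:
        # Hypothetical sketch of the described sequence; names are invented.
        def __init__(self):
            self.state = State.SAFE
            self.channel = None

        def arm(self):
            # Arm: window opens ~200 us, laser supply energized, deflector active.
            assert self.state is State.SAFE
            self.state = State.ARMED

        def select_channel(self, channel, verified):
            # The deflector moves to the event channel, which must be verified.
            assert self.state is State.ARMED
            if not verified:
                self.state = State.SAFE  # abort back to safe on failed verification
                return False
            self.channel = channel
            self.state = State.CHANNEL_SELECTED
            return True

        def fire(self):
            # Firing is only reachable through a verified channel selection.
            assert self.state is State.CHANNEL_SELECTED
            self.state = State.FIRED
            return self.channel

    ctl = SafeArmController()
    ctl.arm()
    ctl.select_channel(2, verified=True)
    print(ctl.fire())  # → 2
    ```

    The point of the sketch is the ordering constraint: no path reaches FIRED without passing the verification step, mirroring the built-in-test philosophy of the apparatus.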

  6. The vibro-acoustic analysis of built-up systems using a hybrid method with parametric and non-parametric uncertainties

    NASA Astrophysics Data System (ADS)

    Cicirello, Alice; Langley, Robin S.

    2013-04-01

    An existing hybrid finite element (FE)/statistical energy analysis (SEA) approach to the analysis of the mid- and high frequency vibrations of a complex built-up system is extended here to a wider class of uncertainty modeling. In the original approach, the constituent parts of the system are considered to be either deterministic, and modeled using FE, or highly random, and modeled using SEA. A non-parametric model of randomness is employed in the SEA components, based on diffuse wave theory and the Gaussian Orthogonal Ensemble (GOE), and this enables the mean and variance of second order quantities such as vibrational energy and response cross-spectra to be predicted. In the present work the assumption that the FE components are deterministic is relaxed by the introduction of a parametric model of uncertainty in these components. The parametric uncertainty may be modeled either probabilistically, or by using a non-probabilistic approach such as interval analysis, and it is shown how these descriptions can be combined with the non-parametric uncertainty in the SEA subsystems to yield an overall assessment of the performance of the system. The method is illustrated by application to an example built-up plate system which has random properties, and benchmark comparisons are made with full Monte Carlo simulations.
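    The benchmark methodology mentioned at the end, full Monte Carlo simulation of a parametric uncertainty to obtain the mean and variance of a response, can be sketched in a few lines. This is a toy propagation, not the hybrid FE/SEA method itself: the response function and parameter distribution are made up for illustration.

    ```python
    import random
    import statistics

    # Toy Monte Carlo propagation of a parametric uncertainty: sample an
    # uncertain stiffness-like parameter and push it through a made-up scalar
    # response, then report the mean and variance of the response, as a full
    # Monte Carlo benchmark of a built-up system would.
    random.seed(0)

    def response(k):
        # Hypothetical response of the deterministic (FE-like) component.
        return 1.0 / (1.0 + k)

    samples = [response(random.gauss(2.0, 0.1)) for _ in range(10000)]
    mean = statistics.fmean(samples)
    var = statistics.pvariance(samples)
    print(round(mean, 3), round(var, 6))
    ```

    In the hybrid method, such sampling over the parametric (FE) uncertainty is combined analytically with the non-parametric (GOE-based) variance of the SEA subsystems, rather than sampled directly.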

  7. The 2015 Bioinformatics Open Source Conference (BOSC 2015)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J. A.; Lapp, Hilmar

    2016-01-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included “Data Science;” “Standards and Interoperability;” “Open Science and Reproducibility;” “Translational Bioinformatics;” “Visualization;” and “Bioinformatics Open Source Project Updates”. In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled “Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community,” that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  8. The Space Launch System -The Biggest, Most Capable Rocket Ever Built, for Entirely New Human Exploration Missions Beyond Earth's Orbit

    NASA Technical Reports Server (NTRS)

    Shivers, C. Herb

    2012-01-01

    NASA is developing the Space Launch System -- an advanced heavy-lift launch vehicle that will provide an entirely new capability for human exploration beyond Earth's orbit. The Space Launch System will provide a safe, affordable and sustainable means of reaching beyond our current limits and opening up new discoveries from the unique vantage point of space. The first developmental flight, or mission, is targeted for the end of 2017. The Space Launch System, or SLS, will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment and science experiments to Earth's orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle Program and the Constellation Program in order to take advantage of proven hardware and cutting-edge tooling and manufacturing technology that will significantly reduce development and operations costs. The rocket will use a liquid hydrogen and liquid oxygen propulsion system, which will include the RS-25D/E from the Space Shuttle Program for the core stage and the J-2X engine for the upper stage. SLS will also use solid rocket boosters for the initial development flights, while follow-on boosters will be competed based on performance requirements and affordability considerations.

  9. The Resilience of Structure Built around the Predicate: Homesign Gesture Systems in Turkish and American Deaf Children

    ERIC Educational Resources Information Center

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Asli; Sancar, Burcu

    2015-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called "homesigns", which have many of the properties of natural language--the so-called resilient properties of language. We explored the resilience of structure built…

  10. The detection method for small molecules coupled with a molecularly imprinted polymer/quantum dot chip using a home-built optical system.

    PubMed

    Liu, Yixi; Wang, Yong; Liu, Le; He, Yonghong; He, Qinghua; Ji, Yanhong

    2016-07-01

    A method to detect small molecules with a molecularly imprinted polymer/quantum dot (MIP-QD) chip using a home-built optical fluidic system was first proposed in this study. Ractopamine (RAC) was used as the model molecule to demonstrate its feasibility. The sensing of the target molecule is based on the quenching amount of the quantum dots. The method is facile, cost-saving, easy to miniaturize, and avoids the cumbersome steps that are needed to get the fluorescent quenching curve using a spectrofluorometer. Most importantly, more details and a more accurate response time can be obtained by use of this method. The experimental results show that the prepared chips with low cost are highly selective and the home-built detection system allows fast binding kinetics to be followed. The recorded quenching process was used to study the kinetic uptake of RAC onto the MIP-QD chip and the specificity towards RAC. The system can further be utilized to study the effect of the solvent, pH and temperature on the selectivity of the prepared MIP. The methodology could be extended to other similar studies with different molecules. Graphical abstract Schematic illustration of the molecularly imprinted polymer/quantum dot chip capturing the target molecule. PMID:27235159

  11. Towards an International Planetary Community Built on Open Source Software: the Evolution of the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Ramirez, P.; Hardman, S.; Hughes, J. S.

    2012-12-01

    Access to the worldwide planetary science research results from robotic exploration of the solar system has become a key driver in internationalizing the data standards from the Planetary Data System. The Planetary Data System, through international agency collaborations with the International Planetary Data Alliance (IPDA), has been developing a next generation set of data standards and technical implementation known as PDS4. PDS4 modernizes the PDS towards a world-wide online data system providing data and technical standards for improving access and interoperability among planetary archives. Since 2006, the IPDA has been working with the PDS to ensure that the next generation PDS is capable of allowing agency autonomy in building compatible archives while providing mechanisms to link the archive together. At the 7th International Planetary Data Alliance (IPDA) Meeting in Bangalore, India, the IPDA discussed and passed a resolution paving the way to adopt the PDS4 data standards. While the PDS4 standards have matured, another effort has been underway to move the PDS, a set of distributed discipline oriented science nodes, into a fully, online, service-oriented architecture. In order to accomplish this goal, the PDS has been developing a core set of software components that form the basis for many of the functions needed by a data system. These include the ability to harvest, validate, register, search and distribute the data products defined by the PDS4 data standards. Rather than having each group build their own independent implementations, the intention is to ultimately govern the implementation of this software through an open source community. This will enable not only sharing of software among U.S. planetary science nodes, but also has the potential of improving collaboration not only on core data management software, but also the tools by the international community. This presentation will discuss the progress in developing an open source infrastructure

  12. Mathematics and evolutionary biology make bioinformatics education comprehensible

    PubMed Central

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with curricula too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621
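    A flavor of the tree enumeration mathematics the abstract refers to: the number of distinct unrooted binary (fully resolved) trees on n labeled taxa is the double factorial (2n-5)!!, a standard result that shows why exhaustive phylogenetic search becomes infeasible so quickly. A few lines tabulate it:

    ```python
    # Count unrooted binary trees on n labeled taxa: (2n-5)!! = 3*5*...*(2n-5).
    def unrooted_binary_trees(n):
        if n < 3:
            raise ValueError("need at least 3 taxa")
        count = 1
        for k in range(3, 2 * n - 4 + 1, 2):  # odd factors 3, 5, ..., 2n-5
            count *= k
        return count

    print([unrooted_binary_trees(n) for n in (3, 4, 5, 10)])
    # → [1, 3, 15, 2027025]
    ```

    Already at 10 taxa there are over two million candidate topologies, which is exactly the kind of algorithmic constraint the authors argue students should understand before trusting a pre-built tree.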

  13. Bioinformatics and the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  14. Reproducible Bioinformatics Research for Biologists

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  15. Visualising "Junk" DNA through Bioinformatics

    ERIC Educational Resources Information Center

    Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia

    2005-01-01

    One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…

  16. Highly Flexible Home-built ND:YVO4 Modelocked Laser System for Trapped Ion Qubit Raman Transitions

    NASA Astrophysics Data System (ADS)

    Sakrejda, Tomasz; Wright, John; Graham, Richard; Zhou, Zichao; Blinov, Boris

    2014-05-01

    A passively mode-locked ND:YVO4 laser system for driving Raman transitions in Ba+ and Yb+ is constructed and evaluated. Based on a commercial CW laser platform, we make straightforward modifications to the cavity to effect passive mode locking. With 20 W of 808 nm diode pump light, we achieve over 4 W of 1064 nm output power, a 150 MHz repetition rate, and 17 ps pulse duration. Laser cavity parameters can be easily modified to facilitate changes in pulse duration or repetition rate. Stable mode locking is achieved at start-up with no perturbations to the cavity resonator. The output 1064 nm light can be frequency-doubled in an external LBO crystal to generate up to 130 mW of 532 nm light in a single pass. The 532 nm light is close enough to the 493 nm line in Ba+ to drive ground state qubit flips with a single laser pulse. We plan to use this laser to drive qubit gates in both 138Ba+ and, with a third harmonic (355 nm) generation system, in 171Yb+. Supported by MUSIQC/IARPA.
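    The quoted figures imply the per-pulse parameters directly: average power divided by repetition rate gives the pulse energy, and (under a rectangular-pulse approximation) energy over duration gives peak power. A quick back-of-envelope check:

    ```python
    # Pulse parameters implied by the reported laser figures.
    avg_power_w = 4.0      # 1064 nm average output power
    rep_rate_hz = 150e6    # repetition rate
    pulse_dur_s = 17e-12   # pulse duration

    pulse_energy_j = avg_power_w / rep_rate_hz         # energy per pulse
    peak_power_w = pulse_energy_j / pulse_dur_s        # rectangular-pulse estimate
    print(f"{pulse_energy_j * 1e9:.1f} nJ, {peak_power_w:.0f} W peak")
    # → 26.7 nJ, 1569 W peak
    ```

    The ~27 nJ, ~1.6 kW figures are approximations only; the true peak power depends on the actual temporal pulse shape.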

  17. Natural and built environmental exposures on children's active school travel: A Dutch global positioning system-based cross-sectional study.

    PubMed

    Helbich, Marco; Emmichoven, Maarten J Zeylmans van; Dijst, Martin J; Kwan, Mei-Po; Pierik, Frank H; Vries, Sanne I de

    2016-05-01

    Physical inactivity among children is on the rise. Active transport to school (ATS), namely walking and cycling there, adds to children's activity level. Little is known about how exposures along actual routes influence children's transport behavior. This study examined how natural and built environments influence mode choice among Dutch children aged 6-11 years. 623 school trips were tracked with global positioning system. Natural and built environmental exposures were determined by means of a geographic information system and their associations with children's active/passive mode choice were analyzed using mixed models. The actual commuted distance is inversely associated with ATS when only personal, traffic safety, and weather features are considered. When the model is adjusted for urban environments, the results are reversed and distance is no longer significant, whereas well-connected streets and cycling lanes are positively associated with ATS. Neither green space nor weather is significant. As distance is not apparent as a constraining travel determinant when moving through urban landscapes, planning authorities should support children's ATS by providing well-designed cities. PMID:27010106

  18. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest in and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The advantage of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  19. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM.

    PubMed

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest in and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The advantage of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292
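    The accuracy metric used in the two records above, the Euclidean average distance error between mapped points and total-station reference points, is straightforward to compute; the coordinates below are made up for illustration:

    ```python
    import math

    # Average Euclidean distance between corresponding mapped/reference points.
    def average_distance_error(mapped, reference):
        dists = [math.dist(m, r) for m, r in zip(mapped, reference)]
        return sum(dists) / len(dists)

    # Hypothetical 3D coordinates (meters).
    mapped = [(0.00, 0.00, 0.00), (1.02, 2.01, 0.99)]
    reference = [(0.03, 0.00, 0.00), (1.00, 2.00, 1.00)]
    print(round(average_distance_error(mapped, reference), 3))  # → 0.027
    ```

    An error of this form below the 0.051 m GSA tolerance is the acceptance criterion the study applied.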

  20. An agent-based multilayer architecture for bioinformatics grids.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Milanesi, Luciano; Romano, Paolo

    2007-06-01

    Due to the huge volume and complexity of biological data available today, a fundamental component of biomedical research is now in silico analysis. This includes modelling and simulation of biological systems and processes, as well as automated bioinformatics analysis of high-throughput data. The quest for bioinformatics resources (including databases, tools, and knowledge) therefore becomes extremely important. Bioinformatics itself is in rapid evolution, and dedicated Grid cyberinfrastructures already offer easier access to and sharing of resources. Furthermore, the concept of the Grid is progressively interleaving with those of Web Services, semantics, and software agents. Agent-based systems can play a key role in learning, planning, interaction, and coordination. Agents also constitute a natural paradigm for engineering simulations of complex systems such as molecular ones. We present here an agent-based, multilayer architecture for bioinformatics Grids. It is intended to support both the execution of complex in silico experiments and the simulation of biological systems. In the architecture, a pivotal role is assigned to an "alive" semantic index of resources, which is also expected to facilitate users' awareness of the bioinformatics domain. PMID:17695749

  1. Mobyle: a new full web bioinformatics framework

    PubMed Central

    Néron, Bertrand; Ménager, Hervé; Maufrais, Corinne; Joly, Nicolas; Maupetit, Julien; Letort, Sébastien; Carrere, Sébastien; Tuffery, Pierre; Letondal, Catherine

    2009-01-01

    Motivation: For the biologist, running bioinformatics analyses involves time-consuming management of data and tools. Users need support to organize their work, retrieve parameters and reproduce their analyses. They also need to be able to combine their analytic tools using a safe data flow software mechanism. Finally, given that scientific tools can be difficult to install, it is particularly helpful for biologists to be able to use these tools through a web user interface. However, providing a web interface for a set of tools raises the problem that a single web portal cannot offer all the existing and possible services: it is the user, again, who has to cope with data copy among a number of different services. A framework enabling portal administrators to build a network of cooperating services would therefore clearly be beneficial. Results: We have designed a system, Mobyle, to provide a flexible and usable Web environment for defining and running bioinformatics analyses. It embeds simple yet powerful data management features that allow the user to reproduce analyses and to combine tools using a hierarchical typing system. Mobyle offers invocation of services distributed over remote Mobyle servers, thus enabling a federated network of curated bioinformatics portals without the user having to learn complex concepts or to install sophisticated software. While being focused on the end user, the Mobyle system also addresses the need, for the bioinformatician, to automate remote services execution: PlayMOBY is a companion tool that automates the publication of BioMOBY web services, using Mobyle program definitions. Availability: The Mobyle system is distributed under the terms of the GNU GPLv2 on the project web site (http://bioweb2.pasteur.fr/projects/mobyle/). It is already deployed on three servers: http://mobyle.pasteur.fr, http://mobyle.rpbs.univ-paris-diderot.fr and http://lipm-bioinfo.toulouse.inra.fr/Mobyle. The PlayMOBY companion is distributed under the

  2. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is introduced. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated, and characterized.
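    The abstract does not give the exact form of the sigmoid-logarithmic transfer function, but the idea — logarithmically compressing signal intensity before a sigmoid so that weak fluorescence levels remain distinguishable — can be sketched as follows (the function form and the `eps` floor are assumptions for illustration, not the chip's actual transfer function):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_log(x, eps=1e-6):
    # Hypothetical sigmoid-logarithmic transfer: compress the intensity
    # logarithmically (relative to a small floor eps) before the sigmoid,
    # expanding the dynamic range available to weak signals.
    return sigmoid(np.log1p(x / eps))

# Two low fluorescence intensities that a plain sigmoid barely separates
weak, weaker = 1e-4, 1e-5
plain_gap = sigmoid(weak) - sigmoid(weaker)        # tiny difference
log_gap = sigmoid_log(weak) - sigmoid_log(weaker)  # much larger difference
print(plain_gap, log_gap)
```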

  3. Bioinformatics in the information age

    SciTech Connect

    Spengler, Sylvia J.

    2000-02-01

    There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics--the shotgun marriage between biology and mathematics, computer science, and engineering--is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control

  4. Tools and collaborative environments for bioinformatics research

    PubMed Central

    Giugno, Rosalba; Pulvirenti, Alfredo

    2011-01-01

    Advanced research requires intensive interaction among a multitude of actors, often possessing different expertise and usually working at a distance from each other. The field of collaborative research aims to establish suitable models and technologies to properly support these interactions. In this article, we first present the reasons for bioinformatics' interest in this context and suggest some research domains that could benefit from collaborative research. We then review the principles and some of the most relevant applications of social networking, with special attention to networks supporting scientific collaboration, highlighting some critical issues, such as identification of users and standardization of formats. We then introduce some systems for collaborative document creation, including wiki systems and tools for ontology development, and review some of the most interesting biological wikis. We also review the principles of Collaborative Development Environments for software and show some examples in Bioinformatics. Finally, we present the principles and some examples of Learning Management Systems. In conclusion, we try to devise some of the goals to be achieved in the short term for the exploitation of these technologies. PMID:21984743

  5. Built to disappear.

    PubMed

    Bauer, Siegfried; Kaltenbrunner, Martin

    2014-06-24

    Microelectronics dominates the technological and commercial landscape of today's electronics industry; ultrahigh density integrated circuits on rigid silicon provide the computing power for smart appliances that help us organize our daily lives. Integrated circuits function flawlessly for decades, yet we like to replace smart phones and tablet computers every year. Disposable electronics, built to disappear in a controlled fashion after the intended lifespan, may be one of the potential applications of transient single-crystalline silicon nanomembranes, reported by Hwang et al. in this issue of ACS Nano. We briefly outline the development of this latest branch of electronics research, and we present some prospects for future developments. Electronics is steadily evolving, and 20 years from now we may find it perfectly normal for smart appliances to be embedded everywhere, on textiles, on our skin, and even in our body. PMID:24892500

  6. Using Bioinformatics Approach to Explore the Pharmacological Mechanisms of Multiple Ingredients in Shuang-Huang-Lian

    PubMed Central

    Zhang, Bai-xia; Li, Jian; Gu, Hao; Li, Qiang; Zhang, Qi; Zhang, Tian-jiao; Wang, Yun; Cai, Cheng-ke

    2015-01-01

    Owing to its proven clinical efficacy, Shuang-Huang-Lian (SHL) has been developed into a variety of dosage forms. However, in-depth research on the targets and pharmacological mechanisms of SHL preparations remains scarce. In the present study, bioinformatics approaches were adopted to integrate relevant data and biological information. As a result, a PPI network was built and its common topological parameters were characterized. The results suggested that the PPI network of SHL exhibits a scale-free property and modular architecture. The drug target network of SHL was structured with 21 functional modules. According to the distribution of certain modules and pharmacological effects, an antitumor effect and potential drug targets were predicted. A biological network containing 26 subnetworks was constructed to elucidate the antipneumonia mechanism of SHL. We also extracted the subnetwork to explicitly display the pathway by which one effective component acts on the pneumonia-related targets. In conclusion, a bioinformatics approach was established for exploring the drug targets, pharmacological activity distribution, and effective components of SHL, as well as its antipneumonia mechanism. Above all, we identified the effective components and disclosed the mechanism of SHL from a systems perspective. PMID:26495421

  7. Using Bioinformatics Approach to Explore the Pharmacological Mechanisms of Multiple Ingredients in Shuang-Huang-Lian.

    PubMed

    Zhang, Bai-xia; Li, Jian; Gu, Hao; Li, Qiang; Zhang, Qi; Zhang, Tian-jiao; Wang, Yun; Cai, Cheng-ke

    2015-01-01

    Owing to its proven clinical efficacy, Shuang-Huang-Lian (SHL) has been developed into a variety of dosage forms. However, in-depth research on the targets and pharmacological mechanisms of SHL preparations remains scarce. In the present study, bioinformatics approaches were adopted to integrate relevant data and biological information. As a result, a PPI network was built and its common topological parameters were characterized. The results suggested that the PPI network of SHL exhibits a scale-free property and modular architecture. The drug target network of SHL was structured with 21 functional modules. According to the distribution of certain modules and pharmacological effects, an antitumor effect and potential drug targets were predicted. A biological network containing 26 subnetworks was constructed to elucidate the antipneumonia mechanism of SHL. We also extracted the subnetwork to explicitly display the pathway by which one effective component acts on the pneumonia-related targets. In conclusion, a bioinformatics approach was established for exploring the drug targets, pharmacological activity distribution, and effective components of SHL, as well as its antipneumonia mechanism. Above all, we identified the effective components and disclosed the mechanism of SHL from a systems perspective. PMID:26495421
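    The scale-free property reported for the SHL PPI network means its degree distribution has a heavy tail: most proteins have few interaction partners while a few hubs have many. A minimal sketch of computing a degree distribution from an edge list (the protein names and edges are hypothetical, not the study's network):

```python
from collections import Counter

# Hypothetical PPI edge list with a hub-like topology: protein "P1"
# interacts with many partners, the rest with only one or two
edges = [("P1", f"P{i}") for i in range(2, 12)] + [("P2", "P3"), ("P4", "P5")]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Degree distribution: {degree: number of nodes with that degree};
# a scale-free network shows many low-degree nodes and rare hubs
dist = Counter(degree.values())
print(sorted(dist.items()))  # → [(1, 6), (2, 4), (10, 1)]
```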

  8. Bioinformatic Insights from Metagenomics through Visualization

    SciTech Connect

    Havre, Susan L.; Webb-Robertson, Bobbie-Jo M.; Shah, Anuj; Posse, Christian; Gopalan, Banu; Brockman, Fred J.

    2005-08-10

    Cutting-edge biological and bioinformatics research seeks a systems perspective through the analysis of multiple types of high-throughput and other experimental data for the same sample. Systems-level analysis requires the integration and fusion of such data, typically through advanced statistics and mathematics. Visualization is a complementary computational approach that supports integration and analysis of complex data or its derivatives. We present a bioinformatics visualization prototype, Juxter, which depicts categorical information derived from or assigned to these diverse data for the purpose of comparing patterns across categorizations. The visualization allows users to easily discern correlated and anomalous patterns in the data. These patterns, which might not be detected automatically by algorithms, may reveal valuable information leading to insight and discovery. We describe the visualization and interaction capabilities and demonstrate its utility in a new field, metagenomics, which combines molecular biology and genetics to identify and characterize genetic material from multi-species microbial samples.

  9. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the responsibility for secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact and low-cost solution based on open-source technologies in the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface that provides easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large, where the use of different hardware combinations allows scaling to the different deployments, while the use of paravirtualization allows increased security and easy migrations and upgrades. PMID:24793019

  10. Using bioinformatics for drug target identification from the genome.

    PubMed

    Jiang, Zhenran; Zhou, Yanhong

    2005-01-01

    Genomics and proteomics technologies have created a paradigm shift in the drug discovery process, with bioinformatics having a key role in the exploitation of genomic, transcriptomic, and proteomic data to gain insights into the molecular mechanisms that underlie disease and to identify potential drug targets. We discuss the current state of the art for some of the bioinformatic approaches to identifying drug targets, including identifying new members of successful target classes and their functions, predicting disease relevant genes, and constructing gene networks and protein interaction networks. In addition, we introduce drug target discovery using the strategy of systems biology, and discuss some of the data resources for the identification of drug targets. Although bioinformatics tools and resources can be used to identify putative drug targets, validating targets is still a process that requires an understanding of the role of the gene or protein in the disease process and is heavily dependent on laboratory-based work. PMID:16336003

  11. ExPASy: SIB bioinformatics resource portal

    PubMed Central

    Artimo, Panu; Jonnalagedda, Manohar; Arnold, Konstantin; Baratin, Delphine; Csardi, Gabor; de Castro, Edouard; Duvaud, Séverine; Flegel, Volker; Fortier, Arnaud; Gasteiger, Elisabeth; Grosdidier, Aurélien; Hernandez, Céline; Ioannidis, Vassilios; Kuznetsov, Dmitry; Liechti, Robin; Moretti, Sébastien; Mostaguir, Khaled; Redaschi, Nicole; Rossier, Grégoire; Xenarios, Ioannis; Stockinger, Heinz

    2012-01-01

    ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal accessing many scientific resources, databases and software tools in different areas of the life sciences. Scientists can now seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics and transcriptomics. The individual resources (databases, web-based and downloadable software tools) are hosted in a ‘decentralized’ way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across ‘selected’ resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in the life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy. PMID:22661580

  12. Bioinformatics in Africa: The Rise of Ghana?

    PubMed Central

    Karikari, Thomas K.

    2015-01-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics. PMID:26378921

  13. Bioinformatics in Africa: The Rise of Ghana?

    PubMed

    Karikari, Thomas K

    2015-09-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics. PMID:26378921

  14. Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-12-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image converts from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  15. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system

    PubMed Central

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-01-01

    In this study, two modes of a hybrid anaerobic digestion (AD) bioreactor with built-in BESs (electrodes installed in the liquid phase (R1) or in the sludge phase (R2)) were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. The decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although the azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1, where electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient for azo dye removal than those in the sludge phase, which provides valuable guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters. PMID:27121278

  16. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-01-01

    In this study, two modes of a hybrid anaerobic digestion (AD) bioreactor with built-in BESs (electrodes installed in the liquid phase (R1) or in the sludge phase (R2)) were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. The decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although the azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1, where electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient for azo dye removal than those in the sludge phase, which provides valuable guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters. PMID:27121278
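    Decolorization efficiency, the headline metric in this abstract, is simply the fractional removal of dye between influent and effluent. A minimal sketch (the concentrations are illustrative, not the study's measurements):

```python
def decolorization_efficiency(influent_mg_l, effluent_mg_l):
    """Percentage of dye removed between influent and effluent."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Hypothetical AYR concentrations (mg/L); the study reports ~90% for R1
efficiency = decolorization_efficiency(200.0, 19.2)
print(f"{efficiency:.1f}%")  # → 90.4%
```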

  17. Optimisation and analysis of microreactor designs for microfluidic gradient generation using a purpose built optical detection system for entire chip imaging.

    PubMed

    Abdulla Yusuf, Hayat; Baldock, Sara J; Barber, Robert W; Fielden, Peter R; Goddard, Nick J; Mohr, Stephan; Treves Brown, Bernard J

    2009-07-01

    This paper presents and fully characterises a novel simplification approach for the development of microsystem based concentration gradient generators with significantly reduced microfluidic networks. Three microreactors are presented; a pair of two-inlet six-outlet (2-6) networks and a two-inlet eleven-outlet (2-11) network design. The mathematical approach has been validated experimentally using a purpose built optical detection system. The experimental results are shown to be in very good agreement with the theoretical predictions from the model. The developed networks are proven to deliver precise linear concentration gradients (R² = 0.9973 and 0.9991 for the (2-6) designs) and the simplified networks are shown to provide enhanced performance over conventional designs, overcoming some of the practical issues associated with traditional networks. The optical measurements were precise enough to validate the linearity in each level of the conventional (2-6) networks (R² ranged from 0.9999 to 0.9973) compared to R² = 1 for the theoretical model. CFD results show that there is an effective upper limit on the operating flow rate. The new simplified (2-11) design was able to maintain a linear outlet profile up to 0.8 µl/s per inlet (R² = 0.9992). The proposed approach is widely applicable for the production of linear and arbitrary concentration profiles, with the potential for high throughput applications that span a wide range of chemical and biological studies. PMID:19532963
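    The linearity figures quoted in this abstract are coefficients of determination (R²) of the measured outlet concentrations against an ideal linear gradient. A minimal sketch of that check for a hypothetical eleven-outlet profile (the measured values are illustrative):

```python
import numpy as np

# Hypothetical normalized concentrations at the 11 outlets of a (2-11)
# network; an ideal linear gradient runs from 1.0 down to 0.0
measured = np.array([1.00, 0.91, 0.79, 0.70, 0.61, 0.50,
                     0.40, 0.29, 0.21, 0.10, 0.00])
ideal = np.linspace(1.0, 0.0, 11)

# Coefficient of determination against the ideal linear profile
ss_res = np.sum((measured - ideal) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.4f}")
```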

  18. Nuclear reactors built, being built, or planned 1992

    SciTech Connect

    Not Available

    1993-07-01

    Nuclear Reactors Built, Being Built, or Planned contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1992. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. Information is presented in five parts: Civilian, Production, Military, Export, and Critical Assembly.

  19. Genomics and Bioinformatics Resources for Crop Improvement

    PubMed Central

    Mochida, Keiichi; Shinozaki, Kazuo

    2010-01-01

    Recent remarkable innovations in platforms for omics-based research and application development provide crucial resources to promote research in model and applied plant species. A combinatorial approach using multiple omics platforms and integration of their outcomes is now an effective strategy for clarifying molecular systems integral to improving plant productivity. Furthermore, promotion of comparative genomics among model and applied plants allows us to grasp the biological properties of each species and to accelerate gene discovery and functional analyses of genes. Bioinformatics platforms and their associated databases are also essential for the effective design of approaches making the best use of genomic resources, including resource integration. We review recent advances in research platforms and resources in plant omics together with related databases and advances in technology. PMID:20208064

  20. Postgenomics: Proteomics and Bioinformatics in Cancer Research

    PubMed Central

    2003-01-01

    Now that the human genome is completed, the characterization of the proteins encoded by the sequence remains a challenging task. The study of the complete protein complement of the genome, the “proteome,” referred to as proteomics, will be essential if new therapeutic drugs and new disease biomarkers for early diagnosis are to be developed. Research efforts are already underway to develop the technology necessary to compare the specific protein profiles of diseased versus nondiseased states. These technologies provide a wealth of information and rapidly generate large quantities of data. Processing the large amounts of data will lead to useful predictive mathematical descriptions of biological systems which will permit rapid identification of novel therapeutic targets and identification of metabolic disorders. Here, we present an overview of the current status and future research approaches in defining the cancer cell's proteome in combination with different bioinformatics and computational biology tools toward a better understanding of health and disease. PMID:14615629

  1. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Cancer.gov

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  2. Rapid Development of Bioinformatics Education in China

    ERIC Educational Resources Information Center

    Zhong, Yang; Zhang, Xiaoyan; Ma, Jian; Zhang, Liang

    2003-01-01

    As the Human Genome Project experiences remarkable success and a flood of biological data is produced, bioinformatics becomes a very "hot" cross-disciplinary field, yet experienced bioinformaticians are urgently needed worldwide. This paper summarises the rapid development of bioinformatics education in China, especially related undergraduate…

  3. Biology in 'silico': The Bioinformatics Revolution.

    ERIC Educational Resources Information Center

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, which has many different uses, for use in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  4. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
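    The dynamic-programming formulation this article teaches is, in its simplest global form, the Needleman–Wunsch recurrence: each cell holds the best score for aligning two prefixes, chosen from a diagonal match/mismatch step or a gap in either sequence. A minimal score-only sketch (the scoring parameters are a common textbook choice, not taken from the article):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap  # a[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                   # match/mismatch
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # → 0
```

Tracing back through the matrix (not shown) recovers the alignment itself; the score table alone suffices to demonstrate the optimization.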

  5. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    ERIC Educational Resources Information Center

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR RLK) genetic…

  6. Fuzzy Logic in Medicine and Bioinformatics

    PubMed Central

    Torres, Angela; Nieto, Juan J.

    2006-01-01

    The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes). PMID:16883057
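The hypercube view mentioned above treats a fuzzy set over n elements as a point in [0,1]^n whose coordinates are membership degrees. Below is a minimal sketch of a standard min/max (Jaccard-style) similarity between two such points; the membership vectors are invented for illustration and are not data from the paper.

```python
# A fuzzy set over n elements is a point in the unit hypercube [0,1]^n;
# each coordinate is a membership degree. The vectors below are invented.

def fuzzy_similarity(u, v):
    """Jaccard-style similarity |u ∩ v| / |u ∪ v|, with min as
    intersection and max as union, applied coordinate-wise."""
    inter = sum(min(a, b) for a, b in zip(u, v))
    union = sum(max(a, b) for a, b in zip(u, v))
    return inter / union if union else 1.0

profile_a = [0.9, 0.2, 0.7, 0.0]
profile_b = [0.8, 0.3, 0.5, 0.1]
print(round(fuzzy_similarity(profile_a, profile_b), 3))
```

A similarity of 1.0 means the two points coincide in the hypercube; values near 0 mean near-disjoint memberships.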

  7. Bioinformatics clouds for big data manipulation

    PubMed Central

    2012-01-01

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap forward from in-house computing infrastructure to utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers: This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. PMID:23190475

  8. Nuclear reactors built, being built, or planned 1993

    SciTech Connect

    Not Available

    1993-08-01

    Nuclear Reactors Built, Being Built, or Planned contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1993. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: civilian, production, military, export, and critical assembly.

  9. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    SciTech Connect

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    Incorporating bioinformatics into courses or independent research projects requires infrastructure for organizing and assessing student work. Here, we present a new platform for faculty to keep current with the rapidly changing field of bioinformatics, the Integrated Microbial Genomes Annotation Collaboration Toolkit (IMG-ACT). It was developed by instructors from both research-intensive and predominantly undergraduate institutions in collaboration with the Department of Energy-Joint Genome Institute (DOE-JGI) as a means to innovate and update undergraduate education and faculty development. The IMG-ACT program provides a cadre of tools, including access to a clearinghouse of genome sequences, bioinformatics databases, data storage, instructor course management, and student notebooks for organizing the results of their bioinformatic investigations. In the process, IMG-ACT makes it feasible to provide undergraduate research opportunities to a greater number and diversity of students, in contrast to the traditional mentor-to-student apprenticeship model for undergraduate research, which can be too expensive and time-consuming to provide for every undergraduate. The IMG-ACT serves as the hub for the network of faculty and students that use the system for microbial genome analysis. Open access of the IMG-ACT infrastructure to participating schools ensures that all types of higher education institutions can utilize it. With the infrastructure in place, faculty can focus their efforts on the pedagogy of bioinformatics, involvement of students in research, and use of this tool for their own research agenda. What the original faculty members of the IMG-ACT development team present here is an overview of how the IMG-ACT program has affected our development in terms of teaching and research, with the hope that it will inspire more faculty to get involved.

  10. Computational Biology and Bioinformatics in Nigeria

    PubMed Central

    Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-01-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. The importance of bioinformatics is rapidly gaining acceptance in this developing country, and bioinformatics groups composed of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310

  11. Bioinformatic challenges in targeted proteomics.

    PubMed

    Reker, Daniel; Malmström, Lars

    2012-09-01

    Selected reaction monitoring mass spectrometry is an emerging targeted proteomics technology that allows for the investigation of complex protein samples with high sensitivity and efficiency. It requires extensive knowledge about the sample for the many parameters needed to carry out the experiment to be set appropriately. Most studies today rely on parameter estimation from prior studies, public databases, or from measuring synthetic peptides. This is efficient and sound, but in the absence of prior data, de novo parameter estimation is necessary. Computational methods can be used to create an automated framework to address this problem. However, the number of available applications is still small. This review aims to give an orientation to the various bioinformatic challenges. To this end, we state the problems in classical machine learning and data mining terms, give examples of implemented solutions, and provide some room for alternatives. This will hopefully lead to an increased momentum for the development of algorithms and serve the needs of the community for computational methods. We note that the combination of such methods in an assisted workflow will ease both the use of targeted proteomics in experimental studies and the further development of computational approaches. PMID:22866949

  12. An approach to regional wetland digital elevation model development using a differential global positioning system and a custom-built helicopter-based surveying system

    USGS Publications Warehouse

    Jones, J.W.; Desmond, G.B.; Henkle, C.; Glover, R.

    2012-01-01

    Accurate topographic data are critical to restoration science and planning for the Everglades region of South Florida, USA. They are needed to monitor and simulate water level, water depth and hydroperiod and are used in scientific research on hydrologic and biologic processes. Because large wetland environments and data acquisition challenge conventional ground-based and remotely sensed data collection methods, the United States Geological Survey (USGS) adapted a classical data collection instrument to global positioning system (GPS) and geographic information system (GIS) technologies. Data acquired with this instrument were processed using geostatistics to yield sub-water level elevation values with centimetre accuracy (±15 cm). The developed database framework, modelling philosophy and metadata protocol allow for continued, collaborative model revision and expansion, given additional elevation or other ancillary data. © 2012 Taylor & Francis.

  13. Bioinformatics Visualisation Tools: An Unbalanced Picture.

    PubMed

    Broască, Laura; Ancuşa, Versavia; Ciocârlie, Horia

    2016-01-01

    Visualization tools represent a key element in triggering human creativity while being supported with the analysis power of the machine. This paper analyzes free network visualization tools for bioinformatics, frames them in domain specific requirements and compares them. PMID:27577488

  14. Bioinformatics and the undergraduate curriculum essay.

    PubMed

    Maloney, Mark; Parker, Jeffrey; Leblanc, Mark; Woodard, Craig T; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of bioinformatics as a new discipline has challenged many colleges and universities to keep current with their curricula, often in the face of static or dwindling resources. On the plus side, many bioinformatics modules and related databases and software programs are free and accessible online, and interdisciplinary partnerships between existing faculty members and their support staff have proved advantageous in such efforts. We present examples of strategies and methods that have been successfully used to incorporate bioinformatics content into undergraduate curricula. PMID:20810947

  15. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  16. Bioinformatics in Italy: BITS2011, the Eighth Annual Meeting of the Italian Society of Bioinformatics

    PubMed Central

    2012-01-01

    The BITS2011 meeting, held in Pisa on June 20-22, 2011, brought together more than 120 Italian researchers working in the field of Bioinformatics, as well as students in Bioinformatics, Computational Biology, Biology, Computer Sciences, and Engineering, representing a landscape of Italian bioinformatics research. This preface provides a brief overview of the meeting and introduces the peer-reviewed manuscripts that were accepted for publication in this Supplement. PMID:22536954

  17. Nuclear reactors built, being built, or planned, 1991

    SciTech Connect

    Simpson, B.

    1992-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1991. The book is divided into three major sections: Section 1 consists of a reactor locator map and reactor tables; Section 2 includes nuclear reactors that are operating, being built, or planned; and Section 3 includes reactors that have been shut down permanently or dismantled. Sections 2 and 3 contain the following classification of reactors: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is an American company -- working either independently or in cooperation with a foreign company (Part 4, in each section). Critical assembly refers to an assembly of fuel and assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  18. Nuclear reactors built, being built, or planned 1996

    SciTech Connect

    1997-08-01

    This publication contains unclassified information about facilities, built, being built, or planned in the United States for domestic use or export as of December 31, 1996. The Office of Scientific and Technical Information, U.S. Department of Energy, gathers this information annually from Washington headquarters, and field offices of DOE; from the U.S. Nuclear Regulatory Commission (NRC); from the U. S. reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from U.S. and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled.

  19. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

    Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, printed data, and data transfer to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannel analyzers. (Author/GA)

  20. Bioinformatics process management: information flow via a computational journal

    PubMed Central

    Feagan, Lance; Rohrer, Justin; Garrett, Alexander; Amthauer, Heather; Komp, Ed; Johnson, David; Hock, Adam; Clark, Terry; Lushington, Gerald; Minden, Gary; Frost, Victor

    2007-01-01

    This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features determined critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples. PMID:18053179

  1. A Guide to Bioinformatics for Immunologists

    PubMed Central

    Whelan, Fiona J.; Yap, Nicholas V. L.; Surette, Michael G.; Golding, G. Brian; Bowdish, Dawn M. E.

    2013-01-01

    Bioinformatics includes a suite of methods, which are cheap, approachable, and many of which are easily accessible without any sort of specialized bioinformatic training. Yet, despite this, bioinformatic tools are under-utilized by immunologists. Herein, we review a representative set of publicly available, easy-to-use bioinformatic tools using our own research on an under-annotated human gene, SCARA3, as an example. SCARA3 shares an evolutionary relationship with the class A scavenger receptors, but preliminary research showed that it was divergent enough that its function remained unclear. In our quest for more information about this gene – did it share gene sequence similarities to other scavenger receptors? Did it contain conserved protein domains? Where was it expressed in the human body? – we discovered the power and informative potential of publicly available bioinformatic tools designed for the novice in mind, which allowed us to hypothesize on the regulation, structure, and function of this protein. We argue that these tools are largely applicable to many facets of immunology research. PMID:24363654

  2. Carving a niche: establishing bioinformatics collaborations

    PubMed Central

    Lyon, Jennifer A.; Tennant, Michele R.; Messner, Kevin R.; Osterbur, David L.

    2006-01-01

    Objectives: The paper describes collaborations and partnerships developed between library bioinformatics programs and other bioinformatics-related units at four academic institutions. Methods: A call for information on bioinformatics partnerships was made via email to librarians who have participated in the National Center for Biotechnology Information's Advanced Workshop for Bioinformatics Information Specialists. Librarians from Harvard University, the University of Florida, the University of Minnesota, and Vanderbilt University responded and expressed willingness to contribute information on their institutions, programs, services, and collaborating partners. Similarities and differences in programs and collaborations were identified. Results: The four librarians have developed partnerships with other units on their campuses that can be categorized into the following areas: knowledge management, instruction, and electronic resource support. All primarily support freely accessible electronic resources, while other campus units deal with fee-based ones. These demarcations are apparent in resource provision as well as in subsequent support and instruction. Conclusions and Recommendations: Through environmental scanning and networking with colleagues, librarians who provide bioinformatics support can develop fruitful collaborations. Visibility is key to building collaborations, as is broad-based thinking in terms of potential partners. PMID:16888668

  3. ESF AS-BUILT CONFIGURATION

    SciTech Connect

    NA

    2005-03-17

    The calculations contained in this document were developed by the "Mining Group of the Design & Engineering Organization" and are intended solely for the use of the "Design & Engineering Organization" in its work regarding the subsurface repository. Yucca Mountain Project personnel from the "Mining Group" should be consulted before use of the calculations for purposes other than those stated herein or use by individuals other than authorized personnel in the "Design & Engineering Organization". The purpose of this calculation is to provide design inputs that can be used to develop an as-built drawing of the Exploratory Studies Facility (ESF) for the planning and development of the subsurface repository. This document includes subsurface as-built surveys, recommendation to complete as-built surveys, and Management and Operating Contractor (M&O) Subsurface Design Drawings as inputs. This calculation is used to provide data and information for an as-built ESF subsurface drawing and is not used in the development of results or conclusions, therefore all inputs are considered as indirect.

  4. Bioinformatics-enabled identification of the HrpL regulon and type III secretion system effector proteins of Pseudomonas syringae pv. phaseolicola 1448A

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability of Pseudomonas syringae pv. phaseolicola to cause halo blight of bean is dependent on its ability to translocate effector proteins into host cells via the Hrp type III secretion system (T3SS). To identity genes encoding type III effectors and other potential virulence factors that are r...

  5. Nuclear reactors built, being built, or planned, 1994

    SciTech Connect

    1995-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1994. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; tables of data for reactors operating, being built, or planned; and tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company -- working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  6. Nuclear reactors built, being built, or planned: 1995

    SciTech Connect

    1996-08-01

    This report contains unclassified information about facilities built, being built, or planned in the US for domestic use or export as of December 31, 1995. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company--working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  7. A Quick Guide for Building a Successful Bioinformatics Community

    PubMed Central

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-01-01

    “Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  8. A quick guide for building a successful bioinformatics community.

    PubMed

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-02-01

    "Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  9. Built Environment Wind Turbine Roadmap

    SciTech Connect

    Smith, J.; Forsyth, T.; Sinclair, K.; Oteri, F.

    2012-11-01

    The market currently encourages built-environment wind turbine (BWT) deployment before the technology is ready for full-scale commercialization. To address this issue, industry stakeholders convened a Rooftop and Built-Environment Wind Turbine Workshop on August 11 - 12, 2010, at the National Wind Technology Center, located at the U.S. Department of Energy’s National Renewable Energy Laboratory in Boulder, Colorado. This report summarizes the workshop.

  10. Response of mollusc assemblages to climate variability and anthropogenic activities: a 4000-year record from a shallow bar-built lagoon system.

    PubMed

    Cerrato, Robert M; Locicero, Philip V; Goodbred, Steven L

    2013-10-01

    With their position at the interface between land and ocean and their fragile nature, lagoons are sensitive to environmental change, and it is reasonable to expect these changes would be recorded in well-preserved taxa such as molluscs. To test this, the 4000-year history of molluscs in Great South Bay, a bar-built lagoon, was reconstructed from 24 vibracores. Shell layers were identified using x-radiography, and faunal counts, shell condition, organic content, and sediment type were measured in 325 samples. Sample age was estimated by interpolating 40 radiocarbon dates. K-means cluster analysis identified three molluscan assemblages, corresponding to sand-associated and mud-associated groups, with the third associated with inlet areas. Redundancy and regression tree analyses indicated that significant transitions from the sand-associated to the mud-associated assemblage occurred over large portions of the bay about 650 and 294 years BP. The first date corresponds to the transition from the Medieval Warm Period to the Little Ice Age; this change in climate reduced the frequency of strong storms, likely leading to reduced barrier island breaching, greater bay enclosure, and fine-grained sediment accumulation. The second date marks the initiation of clear cutting by European settlers, an activity that would have increased runoff of fine-grained material. The occurrence of the inlet assemblage in the western and eastern ends of the bay is consistent with a history of inlets in these areas, even though prior to Hurricane Sandy in 2012, no inlet had been present in the eastern bay for almost 200 years. The dominant species of the mud-associated assemblage, Mulinia lateralis, is a bivalve often associated with environmental disturbances. Its increased frequency over the past 300 years suggests that disturbances are more common in the bay than in the past. Management activities maintaining the current barrier island state may be contributing to the sand-mud transition and to the bay's susceptibility to disturbances. PMID
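The age model described above assigns an age to each sample by interpolating between dated horizons. A minimal sketch of depth-to-age linear interpolation follows; the control points are invented for illustration and are not the study's radiocarbon data.

```python
# Depth-to-age model by linear interpolation between dated horizons.
# Control points (depth_cm, age_yr_BP) below are invented examples.

def interpolate_age(depth, dated):
    """Linearly interpolate an age for `depth` from a sorted list of
    (depth, age) control points; clamp outside the dated range."""
    if depth <= dated[0][0]:
        return dated[0][1]
    if depth >= dated[-1][0]:
        return dated[-1][1]
    for (d0, a0), (d1, a1) in zip(dated, dated[1:]):
        if d0 <= depth <= d1:
            frac = (depth - d0) / (d1 - d0)   # fractional position in segment
            return a0 + frac * (a1 - a0)

control = [(0, 0), (100, 650), (250, 2000), (400, 4000)]
print(interpolate_age(175, control))
```

Real core chronologies would use calibrated radiocarbon ages and more dated horizons per core, but the per-sample age assignment reduces to this kind of lookup.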

  11. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    ERIC Educational Resources Information Center

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…

  12. Bioinformatics: A History of Evolution "In Silico"

    ERIC Educational Resources Information Center

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  13. "Extreme Programming" in a Bioinformatics Class

    ERIC Educational Resources Information Center

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP). The…

  14. Implementing bioinformatic workflows within the bioextract server

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  15. Bioinformatics in Undergraduate Education: Practical Examples

    ERIC Educational Resources Information Center

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  16. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    EPA Science Inventory

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  17. 2010 Translational bioinformatics year in review

    PubMed Central

    Miller, Katharine S

    2011-01-01

    A review of 2010 research in translational bioinformatics provides much to marvel at. We have seen notable advances in personal genomics, pharmacogenetics, and sequencing. At the same time, the infrastructure for the field has burgeoned. While acknowledging that, according to researchers, the members of this field tend to be overly optimistic, the authors predict a bright future. PMID:21672905

  18. Navigating the changing learning landscape: perspective from bioinformatics.ca

    PubMed Central

    Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs. PMID:23515468

  19. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    PubMed

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs. PMID:23515468

  20. A Bioinformatics Reference Model: Towards a Framework for Developing and Organising Bioinformatic Resources

    NASA Astrophysics Data System (ADS)

    Hiew, Hong Liang; Bellgard, Matthew

    2007-11-01

    Life Science research faces the constant challenge of how to effectively handle an ever-growing body of bioinformatics software and online resources. The users and developers of bioinformatics resources have a diverse set of competing demands on how these resources need to be developed and organised. Unfortunately, there does not exist an adequate community-wide framework to integrate such competing demands. The problems that arise from this include unstructured standards development, the emergence of tools that do not meet specific needs of researchers, and often a communications gap between those who use the tools and those who supply them. This paper presents an overview of the different functions and needs of bioinformatics stakeholders to determine what may be required in a community-wide framework. A Bioinformatics Reference Model is proposed as a basis for such a framework. The reference model outlines the functional relationship between research usage and technical aspects of bioinformatics resources. It separates important functions into multiple structured layers, clarifies how they relate to each other, and highlights the gaps that need to be addressed for progress towards a diverse, manageable, and sustainable body of resources. The relevance of this reference model to the bioscience research community, and its implications in progress for organising our bioinformatics resources, are discussed.

  1. Bioinformatic characterization of plant networks

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram

    2008-06-30

    Cells and organisms are governed by networks of interactions, genetic, physical and metabolic. Large-scale experimental studies of interactions between components of biological systems have been performed for a variety of eukaryotic organisms. However, there is a dearth of such data for plants. Computational methods for prediction of relationships between proteins, primarily based on comparative genomics, provide a useful systems-level view of cellular functioning and can be used to extend information about other eukaryotes to plants. We have predicted networks for Arabidopsis thaliana, Oryza sativa indica and japonica and several plant pathogens using the Bioverse (http://bioverse.compbio.washington.edu) and show that they are similar to experimentally-derived interaction networks. Predicted interaction networks for plants can be used to provide novel functional annotations and predictions about plant phenotypes and aid in rational engineering of biosynthesis pathways.

  2. Composable languages for bioinformatics: the NYoSh experiment.

    PubMed

    Simi, Manuele; Campagne, Fabien

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  3. Composable languages for bioinformatics: the NYoSh experiment

    PubMed Central

    Simi, Manuele

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  4. Platinum-related deep levels in silicon and their passivation by atomic hydrogen using a home-built automated DLTS system

    NASA Astrophysics Data System (ADS)

    Reddy, B. P. N.; Reddy, P. N.; Pandu Rangaiah, S. V.

    1996-09-01

    An inexpensive automated DLTS system has been developed in modular form, consisting of modules such as a capacitance meter, pulse generator, DLTS system timing controller, data acquisition system, PID temperature controller, and a cryostat with LN2 flow control facility. These modules, except the capacitance meter and pulse generator, were designed and fabricated in the laboratory, then integrated and interfaced to a PC AT/386 computer. Software has been developed to run the spectrometer, collect data, and perform off-line processing to extract deep-level parameters such as activation energy, capture cross-section and density. The system has been used to study the deep levels of platinum in n-type silicon and their passivation by atomic hydrogen. The estimated activation energies of the two acceptor levels are Ec-0.280 eV and Ec-0.522 eV, and their capture cross-sections are 2.2 x 10^-15 cm^2 and 4.3 x 10^-15 cm^2, respectively. These levels are found to be reactivated when the hydrogenated samples are annealed in the temperature range 350-500 degrees Celsius. The mechanisms of passivation and reactivation of these levels are discussed.
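    Activation energies such as the Ec-0.280 eV value quoted above are conventionally extracted from an Arrhenius plot of the thermal emission rate, using the standard model e_n = A T^2 exp(-Ea/kT), so that ln(e/T^2) versus 1/(kT) has slope -Ea. A minimal sketch of that fit follows; the emission-rate data are synthetic, generated for an assumed 0.28 eV level with an arbitrary prefactor, and are not the paper's measurements.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def arrhenius_activation_energy(temps_K, rates):
        """Least-squares slope of ln(e/T^2) versus 1/(kT) gives -Ea,
        per the standard e_n = A * T^2 * exp(-Ea / kT) emission model."""
        xs = [1.0 / (K_B * T) for T in temps_K]
        ys = [math.log(e / T**2) for T, e in zip(temps_K, rates)]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return -slope  # activation energy in eV

    # Synthetic emission rates for a hypothetical 0.28 eV level.
    Ea_true, A = 0.28, 1.0e7
    temps = [140.0, 150.0, 160.0, 170.0, 180.0]
    rates = [A * T**2 * math.exp(-Ea_true / (K_B * T)) for T in temps]

    Ea = arrhenius_activation_energy(temps, rates)
    ```

    On noiseless synthetic data the fit recovers the assumed energy exactly; real DLTS data would carry scatter and a capture-cross-section prefactor to interpret.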

  5. A toolbox for developing bioinformatics software.

    PubMed

    Rother, Kristian; Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M

    2012-03-01

    Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful to plan a project, support the involvement of experts (e.g. experimentalists), and to promote higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers. PMID:21803787

  6. Novel bioinformatic developments for exome sequencing.

    PubMed

    Lelieveld, Stefan H; Veltman, Joris A; Gilissen, Christian

    2016-06-01

    With the widespread adoption of next generation sequencing technologies by the genetics community and the rapid decrease in costs per base, exome sequencing has become a standard within the repertoire of genetic experiments for both research and diagnostics. Although bioinformatics now offers standard solutions for the analysis of exome sequencing data, many challenges still remain; especially the increasing scale at which exome data are now being generated has given rise to novel challenges in how to efficiently store, analyze and interpret exome data of this magnitude. In this review we discuss some of the recent developments in bioinformatics for exome sequencing and the directions that this is taking us to. With these developments, exome sequencing is paving the way for the next big challenge, the application of whole genome sequencing. PMID:27075447

  7. A toolbox for developing bioinformatics software

    PubMed Central

    Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M.

    2012-01-01

    Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful to plan a project, support the involvement of experts (e.g. experimentalists), and to promote higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers. PMID:21803787

  8. Bioinformatics in New Generation Flavivirus Vaccines

    PubMed Central

    Koraka, Penelope; Martina, Byron E. E.; Osterhaus, Albert D. M. E.

    2010-01-01

    Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease, especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed. PMID:20467477

  9. Discovery and Classification of Bioinformatics Web Services

    SciTech Connect

    Rocco, D; Critchlow, T

    2002-09-02

    The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.
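    The service-class idea above can be sketched as a structural match between a class's required operations and a concrete service's interface. Everything below is a hypothetical illustration: the `ServiceClass` type, operation names, and matching rule are assumptions for the sketch, not the framework described in the abstract.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ServiceClass:
        """A class of Web services, described by the operations its members expose."""
        name: str
        required_ops: frozenset

    def classify(service_ops, classes):
        """Return the names of the service classes whose required operations
        are covered by this service's interface (a simple structural match)."""
        ops = set(service_ops)
        return [c.name for c in classes if c.required_ops <= ops]

    # Hypothetical class description for BLAST-style alignment services.
    blast_like = ServiceClass("sequence-alignment",
                              frozenset({"submit_query", "fetch_alignments"}))
    classes = [blast_like]

    # A discovered service advertising three operations matches the class.
    found = classify({"submit_query", "fetch_alignments", "status"}, classes)
    ```

    A real discovery framework would extract the operation list from a service description (e.g. WSDL) rather than a hand-written set, but the subset test captures the classification step.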

  10. Translational bioinformatics applications in genome medicine

    PubMed Central

    2009-01-01

    Although bioinformaticians have always played useful parts in genomic experimentation, in analytic, engineering, and infrastructure support roles, only recently have they been able to take a primary scientific role in asking and answering questions about human health and disease. Here, I argue that this shift towards asking questions in medicine is the next step needed for the field of bioinformatics. I outline four reasons why bioinformaticians are newly enabled to drive the questions in primary medical discovery: public availability of data, intersection of data across experiments, commoditization of methods, and streamlined validation. I also list four recommendations for bioinformaticians wishing to get more involved in translational research. PMID:19566916

  11. Development and experimental evaluation of a thermography measurement system for real-time monitoring of comfort and heat rate exchange in the built environment

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Sabbatini, E.; Arnesano, M.

    2012-03-01

    A measurement system based on the infrared (IR) thermovision technique (ITT) has been developed for real-time estimation of room thermal variations and comfort conditions in an office-type environment, as part of a feasibility study in the EU FP7 project ‘INTUBE’. An IR camera installed on the ceiling acquires thermal images, and post-processing derives mean surface temperatures, the number of occupants, and the presence of other heat sources (e.g. computers) through detection algorithms. A lumped-parameter model of the room, developed in the Matlab/Simulink environment, receives as input the information extracted from image processing to compute the room's exchanged heat rate, air temperature, and thermal comfort (PMV). The aim is to provide room thermal balance and comfort information in real time for energy-saving purposes, improving on traditional thermostats. Instantaneous information can be displayed for the users or used for automatic HVAC control. The system is based on a custom adaptation of a low-cost IR surveillance system with dedicated radiometric calibration. Experimental results show average absolute discrepancies on the order of 0.4 °C between calculated and measured air temperature over a one-day period. A sensitivity analysis is performed to identify the main uncertainty sources.
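    A lumped-parameter room model of the kind mentioned above can be sketched as a single thermal RC node integrated forward in time. The sketch below is generic, not the INTUBE model: the resistance, capacitance, outdoor temperature, and heat-gain values are all illustrative assumptions.

    ```python
    def simulate_room(T0, T_out, R, C, gains, dt):
        """Forward-Euler integration of C*dT/dt = (T_out - T)/R + Q(t),
        a one-node lumped-capacitance model of room air temperature."""
        T = T0
        history = [T]
        for Q in gains:             # Q: internal heat gain at this step, in W
            dT = ((T_out - T) / R + Q) / C * dt
            T += dT
            history.append(T)
        return history

    # Hypothetical parameters: envelope resistance R in K/W, thermal mass C in J/K.
    R, C = 0.01, 2.0e6
    gains = [300.0] * 3600          # one hour of 300 W occupant + PC gains, 1 s steps
    temps = simulate_room(T0=20.0, T_out=10.0, R=R, C=C, gains=gains, dt=1.0)
    ```

    With these numbers the room cools from 20 °C toward its steady state T_out + Q*R = 13 °C; the time constant RC (about 5.6 h here) sets how quickly it gets there.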

  12. Quantum Bio-Informatics IV

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori

    2011-01-01

    The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. Ojima

  13. Application of Bioinformatics in Chronobiology Research

    PubMed Central

    Lopes, Robson da Silva; Resende, Nathalia Maria; Honorio-França, Adenilda Cristina; França, Eduardo Luzía

    2013-01-01

    Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research. PMID:24187519

  14. Bioinformatics on the cloud computing platform Azure.

    PubMed

    Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  15. Bioinformatics Approach in Plant Genomic Research.

    PubMed

    Ong, Quang; Nguyen, Phuc; Thao, Nguyen Phuong; Le, Ly

    2016-08-01

    Advances in genomics technology have led to dramatic changes in plant biology research. Plant biologists now have easy access to enormous genomic data sets for studying high-density plant genetic variation at the molecular level. Therefore, fully understanding and skillfully using bioinformatics tools to manage and analyze these data is essential in current plant genome research. Many plant genome databases have been established and continue to expand. Meanwhile, bioinformatics-based analytical methods are well developed in many aspects of plant genomic research, including comparative genomic analysis, phylogenomics and evolutionary analysis, and genome-wide association studies. However, the constant upgrading of computational infrastructure, such as high-capacity data storage and high-performance analysis software, is a real challenge for plant genome research. This review focuses on the challenges and opportunities that knowledge and skills in bioinformatics bring to plant scientists in the present plant genomics era, as well as the future need for effective tools to facilitate the translation of knowledge from new sequencing data into enhanced plant productivity. PMID:27499685

  16. Bioinformatics tools for analysing viral genomic data.

    PubMed

    Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J

    2016-04-01

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing. PMID:27217183
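    The quality-control step that the review above lists first can be illustrated with a minimal read filter. The Phred threshold, the record layout, and the keep/discard policy below are illustrative assumptions rather than the behaviour of any specific HTS tool.

    ```python
    def mean_phred(qual, offset=33):
        """Mean Phred score of a FASTQ quality string (Sanger/Illumina 1.8+ encoding)."""
        return sum(ord(c) - offset for c in qual) / len(qual)

    def quality_filter(records, min_mean_q=20):
        """Keep only reads whose mean base quality meets the threshold.
        Each record is a (read_id, sequence, quality_string) tuple."""
        return [r for r in records if mean_phred(r[2]) >= min_mean_q]

    # Hypothetical reads: 'I' encodes Phred 40, '#' encodes Phred 2.
    reads = [
        ("read1", "ACGTACGT", "IIIIIIII"),   # high quality, kept
        ("read2", "ACGTACGT", "########"),   # low quality, discarded
    ]
    kept = quality_filter(reads)
    ```

    Real QC tools also trim adapters and low-quality read ends, but a mean-quality cutoff conveys the idea: reads failing QC never reach assembly or variant calling.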

  17. Bioinformatics on the Cloud Computing Platform Azure

    PubMed Central

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  18. Shared bioinformatics databases within the Unipro UGENE platform.

    PubMed

    Protsyuk, Ivan V; Grekhov, German A; Tiunov, Alexey V; Fursov, Mikhail Y

    2015-01-01

    Unipro UGENE is an open-source bioinformatics toolkit that integrates popular tools along with original instruments for molecular biologists within a unified user interface. Nowadays, most bioinformatics desktop applications, including UGENE, use a local data model while processing different types of data. Such an approach is inconvenient for scientists working cooperatively on the same data: multiple copies of certain files must be made for every workplace and kept synchronized as modifications occur. Therefore, we focused on bringing collaborative work into the UGENE user experience. Currently, several UGENE installations can be connected to a designated shared database and users can interact with it simultaneously. Such databases can be created by UGENE users and used at their discretion. Objects of each data type supported by UGENE, such as sequences, annotations, multiple alignments, etc., can now be easily imported from or exported to a remote storage. One of the main advantages of this system, compared to existing ones, is the almost simultaneous access of client applications to shared data regardless of their volume. Moreover, the system is capable of storing millions of objects. The storage itself is a regular database server, so even an inexpert user is able to deploy it. Thus, UGENE may provide access to shared data for users located, for example, in the same laboratory or institution. UGENE is available at: http://ugene.net/download.html. PMID:26527191

  19. Parallel algorithm research on several important open problems in bioinformatics.

    PubMed

    Niu, Bei-Fang; Lang, Xian-Yu; Lu, Zhong-Hua; Chi, Xue-Bin

    2009-09-01

    High performance computing has opened the door to using bioinformatics and systems biology to explore complex relationships among data, and created the opportunity to tackle very large and involved simulations of biological systems. Many supercomputing centers have jumped on the bandwagon because the opportunities for significant impact in this field are immense. Development of new algorithms, especially parallel algorithms and software to mine new biological information and to assess different relationships among the members of a large biological data set, is becoming very important. This article presents our work on the design and development of parallel algorithms and software to solve several important open problems in bioinformatics, such as structure alignment of RNA sequences, finding new genes, alternative splicing, and gene expression clustering. In order to make this parallel software available to a wide audience, grid computing service interfaces to it have been deployed in the China National Grid (CNGrid). Finally, conclusions and some future research directions are presented. PMID:20640837

  20. Doublet III beamline: as-built

    SciTech Connect

    Harder, C.R.; Holland, M.M.; Parker, J.W.; Gunn, J.; Resnick, L.

    1980-03-01

    In order to fully exploit Doublet III capabilities and to study new plasma physics regimes, a Neutral Beam Injector System has been constructed. Initially, a two-beamline system will supply 7 MW of heating power to the plasma. The system is currently being expanded to inject approximately 20 MW of power (6 beamlines). Each beamline is equipped with two Lawrence Berkeley Laboratory-type rectangular ion sources with 10 cm x 40 cm extraction grids. These sources will accelerate hydrogen ions to 80 keV, with extracted beam currents in excess of 80 A per source expected. The first completed source is currently being tested and conditioned on the High Voltage Test Stand at Lawrence Livermore Laboratory. This paper pictorially reviews the as-built Doublet III neutral beamline, with emphasis on component relation and configuration relative to spatial and source-imposed design constraints.

  1. Bioinformatics for the synthetic biology of natural products: integrating across the Design-Build-Test cycle.

    PubMed

    Carbonell, Pablo; Currin, Andrew; Jervis, Adrian J; Rattray, Nicholas J W; Swainston, Neil; Yan, Cunyu; Takano, Eriko; Breitling, Rainer

    2016-08-27

    Covering: 2000 to 2016. Progress in synthetic biology is enabled by powerful bioinformatics tools allowing the integration of the design, build and test stages of the biological engineering cycle. In this review we illustrate how this integration can be achieved, with a particular focus on natural products discovery and production. Bioinformatics tools for the DESIGN and BUILD stages include tools for the selection, synthesis, assembly and optimization of parts (enzymes and regulatory elements), devices (pathways) and systems (chassis). TEST tools include those for screening, identification and quantification of metabolites for rapid prototyping. The main advantages and limitations of these tools as well as their interoperability capabilities are highlighted. PMID:27185383

  2. Translational Bioinformatics Approaches to Drug Development

    PubMed Central

    Readhead, Ben; Dudley, Joel

    2013-01-01

    Significance A majority of therapeutic interventions occur late in the pathological process, when treatment outcomes can be less predictable and treatments less effective, highlighting the need for new precise and preventive therapeutic development strategies that consider genomic and environmental context. Translational bioinformatics is well positioned to contribute to the many challenges inherent in bridging this gap between our current reactive methods of healthcare delivery and the intent of precision medicine, particularly in the area of drug development, which forms the focus of this review. Recent Advances A variety of powerful informatics methods for organizing and leveraging the vast wealth of molecular measurements available for a broad range of disease contexts have recently emerged. These include methods for data-driven disease classification, drug repositioning, identification of disease biomarkers, and the creation of disease network models, each with significant impacts on drug development approaches. Critical Issues An important bottleneck in the application of bioinformatics methods in translational research is the lack of investigators who are versed in both biomedical domains and informatics. Efforts to nurture both sets of competencies within individuals and to increase interfield visibility will help to accelerate the adoption and increased application of bioinformatics in translational research. Future Directions It is possible to construct predictive, multiscale network models of disease by integrating genotype, gene expression, clinical traits, and other multiscale measures using causal network inference methods. This can enable the identification of the “key drivers” of pathology, which may represent novel therapeutic targets or biomarker candidates that play a more direct role in the etiology of disease. PMID:24527359
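
    As a toy illustration of the "key driver" idea: one common heuristic ranks nodes in an inferred causal network by the size of their downstream neighbourhood, since a regulator that reaches many genes is a stronger driver candidate. The network and names below are hypothetical, and this sketch is not the causal network inference machinery the review refers to:

```python
# Toy directed causal network: edges point from regulator to affected gene.
edges = {
    "TF1":   ["geneA", "geneB"],
    "geneA": ["geneC"],
    "geneB": ["geneC", "geneD"],
    "geneC": [],
    "geneD": [],
}

def downstream(node, graph):
    """All nodes reachable from `node`, i.e. its potential regulatory targets."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Rank candidate key drivers by how much of the network lies downstream.
ranking = sorted(edges, key=lambda n: len(downstream(n, edges)), reverse=True)
```

    Here "TF1" tops the ranking because every other gene is reachable from it; real key-driver analyses combine this kind of topology with statistical evidence from the integrated data.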

  3. Microbial bioinformatics for food safety and production

    PubMed Central

    Alkema, Wynand; Boekhorst, Jos; Wels, Michiel

    2016-01-01

    In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production has traditionally been a trial-and-error effort guided by expert knowledge of the fermentation process. Current developments in high-throughput ‘omics’ technologies make it possible to develop more rational approaches to improving fermentation processes, from both the food functionality and the food safety perspectives. Here, the authors thematically review typical bioinformatics techniques and approaches to improving various aspects of the microbial production of fermented food products and food safety. PMID:26082168

  4. Critical Issues in Bioinformatics and Computing

    PubMed Central

    Kesh, Someswa; Raghupathi, Wullianallur

    2004-01-01

    This article provides an overview of the field of bioinformatics and its implications for the various participants. Next-generation issues facing developers (programmers), users (molecular biologists), and the general public (patients) who would benefit from the potential applications are identified. The goal is to create awareness and debate about the opportunities (such as career paths) and the challenges (such as privacy) that arise. A triad model of the participants' roles and responsibilities is presented, along with the identification of the challenges and possible solutions. PMID:18066389

  5. Translational Bioinformatics: Past, Present, and Future

    PubMed Central

    Tenenbaum, Jessica D.

    2016-01-01

    Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline’s brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field. PMID:26876718

  6. Bioinformatics in proteomics: application, terminology, and pitfalls.

    PubMed

    Wiemer, Jan C; Prokudin, Alexander

    2004-01-01

    Bioinformatics applies data mining, i.e., modern computer-based statistics, to biomedical data. It leverages machine learning approaches, such as artificial neural networks, decision trees and clustering algorithms, and is ideally suited for handling huge amounts of data. In this article, we review the analysis of mass spectrometry data in proteomics, starting with common pre-processing steps and using single decision trees and decision tree ensembles for classification. Special emphasis is put on the pitfall of overfitting, i.e., generating overly complex single decision trees. Finally, we discuss the pros and cons of the two different decision tree usages. PMID:15237926
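
    The overfitting pitfall is easy to reproduce: an unconstrained decision tree memorises label noise that a depth-limited tree ignores. The following is a self-contained sketch using a hand-rolled threshold tree on synthetic one-dimensional data, an illustrative stand-in rather than the authors' mass-spectrometry pipeline:

```python
import random

def gini(labels):
    """Gini impurity for binary labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def build_tree(points, depth, max_depth):
    """points: list of (x, label). Greedy threshold tree, depth-limited."""
    labels = [y for _, y in points]
    majority = int(sum(labels) * 2 >= len(labels))
    if depth >= max_depth or len(set(labels)) == 1:
        return ("leaf", majority)
    best = None
    for t, _ in points:  # candidate thresholds are the observed x values
        left = [p for p in points if p[0] <= t]
        right = [p for p in points if p[0] > t]
        if not left or not right:
            continue
        score = (len(left) * gini([y for _, y in left]) +
                 len(right) * gini([y for _, y in right])) / len(points)
        if best is None or score < best[0]:
            best = (score, t, left, right)
    if best is None:  # no valid split (all x identical)
        return ("leaf", majority)
    _, t, left, right = best
    return ("split", t,
            build_tree(left, depth + 1, max_depth),
            build_tree(right, depth + 1, max_depth))

def predict(tree, x):
    while tree[0] == "split":
        tree = tree[2] if x <= tree[1] else tree[3]
    return tree[1]

def accuracy(tree, points):
    return sum(predict(tree, x) == y for x, y in points) / len(points)

random.seed(0)

def sample(n):
    """Synthetic 'peak intensity' data: class 1 if x > 0.5, 20% label noise."""
    pts = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.2:
            y = 1 - y
        pts.append((x, y))
    return pts

train, test = sample(80), sample(400)
stump = build_tree(train, 0, 1)       # shallow, well-regularised tree
deep = build_tree(train, 0, 10**6)    # unconstrained tree that memorises noise
```

    The unconstrained tree fits the noisy training set perfectly, while the one-split stump cannot; on held-out data the stump typically generalises better, which is exactly the trade-off between single deep trees and simpler or ensembled models that the article discusses.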

  7. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    ERIC Educational Resources Information Center

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  8. OpenHelix: bioinformatics education outside of a different box.

    PubMed

    Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C

    2010-11-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review. PMID:20798181

  9. OpenHelix: bioinformatics education outside of a different box

    PubMed Central

    Mangan, Mary E.; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C.

    2010-01-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review. PMID:20798181

  10. Translational Bioinformatics: Linking the Molecular World to the Clinical World

    PubMed Central

    Altman, RB

    2014-01-01

    Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care. PMID:22549287