Sample records for sample preparation workflows

  1. Small RNA Library Preparation Method for Next-Generation Sequencing Using Chemical Modifications to Prevent Adapter Dimer Formation.

    PubMed

    Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I

    2016-01-01

    For most sample types, the automation of RNA and DNA sample preparation workflows enables high-throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimers, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate the tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.

  2. Rapid Assessment of Contaminants and Interferences in Mass Spectrometry Data Using Skyline

    NASA Astrophysics Data System (ADS)

    Rardin, Matthew J.

    2018-04-01

    Proper sample preparation in proteomic workflows is essential to the success of modern mass spectrometry experiments. Complex workflows often require reagents which are incompatible with MS analysis (e.g., detergents) necessitating a variety of sample cleanup procedures. Efforts to understand and mitigate sample contamination are a continual source of disruption with respect to both time and resources. To improve the ability to rapidly assess sample contamination from a diverse array of sources, I developed a molecular library in Skyline for rapid extraction of contaminant precursor signals using MS1 filtering. This contaminant template library is easily managed and can be modified for a diverse array of mass spectrometry sample preparation workflows. Utilization of this template allows rapid assessment of sample integrity and indicates potential sources of contamination.
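    The core operation behind such a contaminant template is simple to illustrate: observed MS1 precursor m/z values are matched against a list of known contaminant masses within a mass tolerance. The minimal Python sketch below shows that idea only; it is not Skyline code, and the contaminant names and masses are invented placeholders.

    ```python
    # Illustrative sketch only: flag observed MS1 precursor m/z values that fall
    # within a ppm tolerance of known contaminant masses. This mimics the idea of
    # screening data against a contaminant template; it is not Skyline's
    # implementation, and the entries below are hypothetical.

    CONTAMINANTS = {
        "PEG ladder ion": 415.2537,      # hypothetical example masses
        "Triton X-100 ion": 515.3113,
        "Tween-20 ion": 522.3562,
    }

    def ppm_error(observed: float, reference: float) -> float:
        """Relative mass error in parts per million."""
        return (observed - reference) / reference * 1e6

    def flag_contaminants(precursors, tolerance_ppm: float = 10.0):
        """Return (m/z, contaminant name, ppm error) for every match."""
        hits = []
        for mz in precursors:
            for name, ref in CONTAMINANTS.items():
                err = ppm_error(mz, ref)
                if abs(err) <= tolerance_ppm:
                    hits.append((mz, name, round(err, 2)))
        return hits

    if __name__ == "__main__":
        observed = [415.2540, 610.1840, 522.3559]
        for mz, name, err in flag_contaminants(observed):
            print(f"{mz:.4f} matches {name} ({err} ppm)")
    ```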

  3. An efficient field and laboratory workflow for plant phylotranscriptomic projects

    PubMed Central

    Yang, Ya; Moore, Michael J.; Brockington, Samuel F.; Timoneda, Alfonso; Feng, Tao; Marx, Hannah E.; Walker, Joseph F.; Smith, Stephen A.

    2017-01-01

    Premise of the study: We describe a field and laboratory workflow developed for plant phylotranscriptomic projects that involves cryogenic tissue collection in the field, RNA extraction and quality control, and library preparation. We also make recommendations for sample curation. Methods and Results: A total of 216 frozen tissue samples of Caryophyllales and other angiosperm taxa were collected from the field or botanical gardens. The samples included difficult mucilaginous tissues such as those of Cactaceae and Droseraceae. RNA was extracted, stranded mRNA libraries were prepared, and libraries were sequenced on Illumina HiSeq platforms. Conclusions: Our workflow is not only cost effective (ca. $270 per sample, as of August 2016, from tissue to reads) and time efficient (less than 50 h for 10–12 samples including all laboratory work and sample curation), but also has proven robust for extraction of difficult samples such as tissues containing high levels of secondary compounds. PMID:28337391

  4. TruSeq Stranded mRNA and Total RNA Sample Preparation Kits

    Cancer.gov

    Total RNA-Seq enabled by ribosomal RNA (rRNA) reduction is compatible with formalin-fixed paraffin embedded (FFPE) samples, which contain potentially critical biological information. The family of TruSeq Stranded Total RNA sample preparation kits provides a unique combination of unmatched data quality for both mRNA and whole-transcriptome analyses, robust interrogation of both standard and low-quality samples and workflows compatible with a wide range of study designs.

  5. RNA-Seq workflow: gene-level exploratory analysis and differential expression

    PubMed Central

    Love, Michael I.; Anders, Simon; Kim, Vladislav; Huber, Wolfgang

    2015-01-01

    Here we walk through an end-to-end gene-level RNA-Seq differential expression workflow using Bioconductor packages. We will start from the FASTQ files, show how these were aligned to the reference genome, and prepare a count matrix which tallies the number of RNA-seq reads/fragments within each gene for each sample. We will perform exploratory data analysis (EDA) for quality assessment and to explore the relationship between samples, perform differential gene expression analysis, and visually explore the results. PMID:26674615
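    The workflow itself is written in R/Bioconductor (the authors build on packages such as DESeq2); the short Python sketch below only illustrates the central data structure, a genes-by-samples count matrix, with a naive counts-per-million normalization and log2 fold changes. The counts are invented, and no dispersion estimation or statistical testing is attempted.

    ```python
    # Minimal illustration of the data structures in a gene-level RNA-seq DE
    # workflow: a count matrix (genes x samples) and a naive log2 fold change.
    # The published workflow uses R/Bioconductor (e.g., DESeq2), which adds
    # size-factor normalization, dispersion estimation, and proper testing;
    # nothing here replaces that.
    import numpy as np

    genes = ["geneA", "geneB", "geneC"]
    samples = ["ctrl_1", "ctrl_2", "trt_1", "trt_2"]

    # Toy counts: rows = genes, columns = samples (made-up numbers).
    counts = np.array([
        [100, 120,  40,  35],
        [ 10,  12,  90, 110],
        [500, 480, 510, 495],
    ])

    # Library-size normalization: counts per million (a simplification of the
    # size-factor approach used by DESeq2).
    cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6

    ctrl = cpm[:, :2].mean(axis=1)
    trt = cpm[:, 2:].mean(axis=1)
    log2fc = np.log2((trt + 1) / (ctrl + 1))  # pseudocount avoids log(0)

    for g, fc in zip(genes, log2fc):
        print(f"{g}: log2 fold change = {fc:+.2f}")
    ```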

  6. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    PubMed

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., preparing, processing, and evaluating samples in parallel), and miniaturization (i.e., reducing sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot life, and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  7. Unattended reaction monitoring using an automated microfluidic sampler and on-line liquid chromatography.

    PubMed

    Patel, Darshan C; Lyu, Yaqi Fara; Gandarilla, Jorge; Doherty, Steve

    2018-04-03

    In-process sampling and analysis is an important aspect of monitoring kinetic profiles and impurity formation or rejection, both in development and during commercial manufacturing. In pharmaceutical process development, the technology of choice for a substantial portion of this analysis is high-performance liquid chromatography (HPLC). Traditionally, sample extraction and preparation for reaction characterization have been performed manually. This can be time consuming, laborious, and impractical for long processes. Depending on the complexity of the sample preparation, there can be variability introduced by different analysts, and in some cases, the integrity of the sample can be compromised during handling. While there are commercial instruments available for on-line monitoring with HPLC, they lack capabilities in many key areas. Some do not provide integration of the sampling and analysis, while others afford limited flexibility in sample preparation. The current offerings provide a limited number of unit operations for sample processing and no option for workflow customization. This work describes the development of a microfluidic automated program (MAP) which fully automates sample extraction, manipulation, and on-line LC analysis. The flexible system is controlled using an intuitive Microsoft Excel-based user interface. The autonomous system is capable of unattended reaction monitoring, with flexible unit operations and workflow customization to enable complex operations and on-line sample preparation. The automated system is shown to offer advantages over manual approaches in key areas while providing consistent and reproducible in-process data. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. DEWEY: the DICOM-enabled workflow engine system.

    PubMed

    Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L

    2014-06-01

    Workflow is a widely used term to describe the sequence of steps to accomplish a task. The use of workflow technology in medicine and medical imaging in particular is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department. We designed it in a way to allow for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort. It also met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently assure that complex workflows are completed in a timely fashion.
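    As a conceptual illustration of the workflow-engine idea only (not the DEWEY implementation, and with no DICOM handling), the sketch below runs a named sequence of steps and reports whether an examination preparation completed; all step names are hypothetical.

    ```python
    # Minimal sketch of a workflow engine: a named sequence of steps executes in
    # order, and the examination counts as "prepared" only if every step succeeds.
    # Illustrative only; a real engine adds queuing, retries, and persistence.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Workflow:
        name: str
        steps: List[Callable[[dict], None]] = field(default_factory=list)

        def run(self, context: dict) -> bool:
            for step in self.steps:
                try:
                    step(context)
                except Exception as exc:  # a real engine would retry or alert
                    print(f"{self.name}: step {step.__name__} failed: {exc}")
                    return False
            return True

    # Hypothetical preparation steps for a radiology examination.
    def fetch_prior_studies(ctx): ctx["priors"] = ["CT 2013-05-01"]
    def apply_hanging_protocol(ctx): ctx["layout"] = "current-vs-prior"
    def notify_radiologist(ctx): ctx["ready"] = True

    prep = Workflow("exam-preparation",
                    [fetch_prior_studies, apply_hanging_protocol, notify_radiologist])
    ctx = {"accession": "A12345"}
    print("prepared in time:", prep.run(ctx))
    ```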

  9. Semiautomated Sample Preparation for Protein Stability and Formulation Screening via Buffer Exchange.

    PubMed

    Ying, William; Levons, Jaquan K; Carney, Andrea; Gandhi, Rajesh; Vydra, Vicky; Rubin, A Erik

    2016-06-01

    A novel semiautomated buffer exchange process workflow was developed to enable efficient early protein formulation screening. An antibody fragment protein, BMSdab, was used to demonstrate the workflow. The process afforded 60% to 80% savings in cycle time and scientist time and significant material efficiencies. These efficiencies ultimately facilitated execution of this stability work earlier in the drug development process, allowing this tool to inform the developability of potential candidates from a formulation perspective. To overcome the key technical challenges, the protein solution was buffer-exchanged by centrifuge filtration into formulations for stability screening in a 96-well plate with an ultrafiltration membrane, leveraging automated liquid handling and acoustic volume measurements to allow several cycles of exchanges. The formulations were transferred into a vacuum manifold and sterile filtered into a rack holding 96 glass vials. The vials were sealed with a capmat of individual caps and placed in stability stations. Stability of the samples prepared by this process was demonstrated to be comparable to that of samples prepared by the standard process. This process enabled screening of a number of formulations of a protein at an early pharmaceutical development stage with a short sample preparation time. © 2015 Society for Laboratory Automation and Screening.

  10. A workflow for multiclass determination of 256 pesticides in essential oils by liquid chromatography tandem mass spectrometry using evaporation and dilution approaches: Application to lavandin, lemon and cypress essential oils.

    PubMed

    Fillatre, Yoann; Rondeau, David; Daguin, Antoine; Communal, Pierre-Yves

    2016-01-01

    This paper describes the determination of 256 multiclass pesticides in cypress and lemon essential oils (EOs) by way of liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI/MS/MS) analysis using the scheduled selected reaction monitoring mode (sSRM) available on a hybrid quadrupole linear ion trap (QLIT) mass spectrometer. The performance of sample preparation of lemon and cypress EOs based on either dilution or evaporation under nitrogen assisted by controlled heating was assessed. The best limits of quantification (LOQs) were achieved with the evaporation-under-nitrogen method, giving LOQs ≤ 10 µg L(-1) for 91% of the pesticides. In addition, the very satisfactory results obtained for recovery, repeatability and linearity showed that for EOs of relatively low evaporation temperature, a sample preparation based on evaporation under nitrogen is well adapted and preferable to dilution. By compiling these results with those previously published by some of us on lavandin EO, we proposed a workflow dedicated to multiresidue determination of pesticides in various EOs by LC-ESI/sSRM. Among the steps involved in this workflow, the protocol related to mass spectrometry proposes an alternative confirmation method to the classical SRM ratio criteria, based on an sSRM survey scan followed by an information-dependent acquisition using the sensitive enhanced product ion (EPI) scan to generate MS/MS spectra that are then compared to a reference. The proposed workflow was applied to lemon EO samples, highlighting for the first time the simultaneous detection of 20 multiclass pesticides in one EO. Some pesticides showed very high concentration levels, with amounts greatly exceeding the mg L(-1) level. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Differential gene expression in the siphonophore Nanomia bijuga (Cnidaria) assessed with multiple next-generation sequencing workflows.

    PubMed

    Siebert, Stefan; Robinson, Mark D; Tintori, Sophia C; Goetz, Freya; Helm, Rebecca R; Smith, Stephen A; Shaner, Nathan; Haddock, Steven H D; Dunn, Casey W

    2011-01-01

    We investigated differential gene expression between functionally specialized feeding polyps and swimming medusae in the siphonophore Nanomia bijuga (Cnidaria) with a hybrid long-read/short-read sequencing strategy. We assembled a set of partial gene reference sequences from long-read data (Roche 454), and generated short-read sequences from replicated tissue samples that were mapped to the references to quantify expression. We collected and compared expression data with three short-read expression workflows that differ in sample preparation, sequencing technology, and mapping tools. These workflows were Illumina mRNA-Seq, which generates sequence reads from random locations along each transcript, and two tag-based approaches, SOLiD SAGE and Helicos DGE, which generate reads from particular tag sites. Differences in expression results across workflows were mostly due to the differential impact of missing data in the partial reference sequences. When all 454-derived gene reference sequences were considered, Illumina mRNA-Seq detected more than twice as many differentially expressed (DE) reference sequences as the tag-based workflows. This discrepancy was largely due to missing tag sites in the partial reference that led to false negatives in the tag-based workflows. When only the subset of reference sequences that unambiguously have tag sites was considered, we found broad congruence across workflows, and they all identified a similar set of DE sequences. Our results are promising in several regards for gene expression studies in non-model organisms. First, we demonstrate that a hybrid long-read/short-read sequencing strategy is an effective way to collect gene expression data when an annotated genome sequence is not available. Second, our replicated sampling indicates that expression profiles are highly consistent across field-collected animals in this case. Third, the impacts of partial reference sequences on the ability to detect DE can be mitigated through workflow choice and deeper reference sequencing.
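    The false-negative mechanism described above is easy to illustrate: a tag-based protocol can only report expression for a reference sequence that actually contains the tag site. The sketch below assumes a CATG-type anchoring site, as in classic SAGE chemistry, purely for illustration; the sequences are invented and the exact SOLiD SAGE and Helicos DGE tag definitions may differ.

    ```python
    # Sketch of the false-negative mechanism: a tag-based workflow is blind to
    # any reference sequence that lacks the tag site. A CATG anchoring site is
    # assumed here for illustration only.

    def has_tag_site(reference_seq: str, site: str = "CATG") -> bool:
        return site in reference_seq.upper()

    partial_references = {
        "ref_001": "ATGGCATGCCTTAGGA",   # contains CATG -> detectable by tags
        "ref_002": "ATGGCCTTAGGAGGTT",   # no CATG -> risk of a false negative
    }

    for name, seq in partial_references.items():
        status = "usable" if has_tag_site(seq) else "false negative risk"
        print(f"{name}: {status} for a tag-based workflow")
    ```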

  12. Differential Gene Expression in the Siphonophore Nanomia bijuga (Cnidaria) Assessed with Multiple Next-Generation Sequencing Workflows

    PubMed Central

    Siebert, Stefan; Robinson, Mark D.; Tintori, Sophia C.; Goetz, Freya; Helm, Rebecca R.; Smith, Stephen A.; Shaner, Nathan; Haddock, Steven H. D.; Dunn, Casey W.

    2011-01-01

    We investigated differential gene expression between functionally specialized feeding polyps and swimming medusae in the siphonophore Nanomia bijuga (Cnidaria) with a hybrid long-read/short-read sequencing strategy. We assembled a set of partial gene reference sequences from long-read data (Roche 454), and generated short-read sequences from replicated tissue samples that were mapped to the references to quantify expression. We collected and compared expression data with three short-read expression workflows that differ in sample preparation, sequencing technology, and mapping tools. These workflows were Illumina mRNA-Seq, which generates sequence reads from random locations along each transcript, and two tag-based approaches, SOLiD SAGE and Helicos DGE, which generate reads from particular tag sites. Differences in expression results across workflows were mostly due to the differential impact of missing data in the partial reference sequences. When all 454-derived gene reference sequences were considered, Illumina mRNA-Seq detected more than twice as many differentially expressed (DE) reference sequences as the tag-based workflows. This discrepancy was largely due to missing tag sites in the partial reference that led to false negatives in the tag-based workflows. When only the subset of reference sequences that unambiguously have tag sites was considered, we found broad congruence across workflows, and they all identified a similar set of DE sequences. Our results are promising in several regards for gene expression studies in non-model organisms. First, we demonstrate that a hybrid long-read/short-read sequencing strategy is an effective way to collect gene expression data when an annotated genome sequence is not available. Second, our replicated sampling indicates that expression profiles are highly consistent across field-collected animals in this case. Third, the impacts of partial reference sequences on the ability to detect DE can be mitigated through workflow choice and deeper reference sequencing. PMID:21829563

  13. High-throughput automated microfluidic sample preparation for accurate microbial genomics

    PubMed Central

    Kim, Soohong; De Jonghe, Joachim; Kulesa, Anthony B.; Feldman, David; Vatanen, Tommi; Bhattacharyya, Roby P.; Berdy, Brittany; Gomez, James; Nolan, Jill; Epstein, Slava; Blainey, Paul C.

    2017-01-01

    Low-cost shotgun DNA sequencing is transforming the microbial sciences. Sequencing instruments are so effective that sample preparation is now the key limiting factor. Here, we introduce a microfluidic sample preparation platform that integrates the key steps of cells-to-sequencing-library sample preparation for up to 96 samples and reduces DNA input requirements 100-fold while maintaining or improving data quality. The general-purpose microarchitecture we demonstrate supports workflows with arbitrary numbers of reaction and clean-up or capture steps. By reducing the sample quantity requirements, we enabled low-input (∼10,000 cells) whole-genome shotgun (WGS) sequencing of Mycobacterium tuberculosis and soil micro-colonies with superior results. We also leveraged the enhanced throughput to sequence ∼400 clinical Pseudomonas aeruginosa libraries and demonstrate excellent single-nucleotide polymorphism detection performance that explained phenotypically observed antibiotic resistance. Fully integrated lab-on-chip sample preparation overcomes technical barriers to enable broader deployment of genomics across many basic research and translational applications. PMID:28128213

  14. MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.

    PubMed

    Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu

    2012-06-01

    In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations, specifically using the Hamilton MicroLab® STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation, specifically the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck, and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves on all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention, and eliminates sample-handling error.

  15. Flexible automated approach for quantitative liquid handling of complex biological samples.

    PubMed

    Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H

    2007-11-01

    A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.

  16. New on-line separation workflow of microbial metabolites via hyphenation of analytical and preparative comprehensive two-dimensional liquid chromatography.

    PubMed

    Yan, Xia; Wang, Li-Juan; Wu, Zhen; Wu, Yun-Long; Liu, Xiu-Xiu; Chang, Fang-Rong; Fang, Mei-Juan; Qiu, Ying-Kun

    2016-10-15

    Microbial metabolites represent an important source of bioactive natural products, but they always exhibit diverse chemical structures or complicated chemical compositions with low contents of active ingredients. Traditional separation methods rely mainly on off-line combination of open-column chromatography and preparative high performance liquid chromatography (HPLC). However, the multi-step and prolonged separation procedure might lead to exposure to oxygen and structural transformation of metabolites. In the present work, a new two-dimensional separation workflow was developed for fast isolation and analysis of microbial metabolites from Chaetomium globosum SNSHI-5, a cytotoxic fungus derived from an extreme environment. The advantage of the analytical comprehensive two-dimensional liquid chromatography (2D-LC) lies in its ability to analyze the composition of the metabolites and to optimize the separation conditions for the preparative 2D-LC. Furthermore, gram-scale preparative 2D-LC separation of the crude fungus extract could be performed on a medium-pressure liquid chromatography × preparative high-performance liquid chromatography system under the optimized conditions. Interestingly, 12 cytochalasan derivatives, including two new compounds named cytoglobosin Ab (3) and isochaetoglobosin Db (8), were successfully obtained with high purity in a short period of time. The structures of the isolated metabolites were comprehensively characterized by HR-ESI-MS and NMR. Notably, this is the first report on the combination of analytical and preparative 2D-LC for the separation of microbial metabolites. The new workflow exhibited apparent advantages in separation efficiency and sample treatment capacity compared with conventional methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Standardized protocols for quality control of MRM-based plasma proteomic workflows.

    PubMed

    Percy, Andrew J; Chambers, Andrew G; Smith, Derek S; Borchers, Christoph H

    2013-01-04

    Mass spectrometry (MS)-based proteomics is rapidly emerging as a viable technology for the identification and quantitation of biological samples, such as human plasma, the most complex yet most commonly employed biofluid in clinical analyses. A transition from a qualitative to a quantitative science is required if proteomics is to become a clinically useful technique. MS, however, has been criticized for a lack of reproducibility and interlaboratory transferability. Currently, the MS and plasma proteomics communities lack standardized protocols and reagents to ensure that high-quality quantitative data can be accurately and precisely reproduced by laboratories across the world using different MS technologies. Toward addressing this issue, we have developed standard protocols for multiple reaction monitoring (MRM)-based assays with customized isotopically labeled internal standards for quality control of the sample preparation workflow and the MS platform in quantitative plasma proteomic analyses. The development of reference standards and their application to a single MS platform is discussed herein, along with the results from intralaboratory tests. The tests highlighted the importance of the reference standards in assessing the efficiency and reproducibility of the entire bottom-up proteomic workflow, and revealed errors related to sample preparation as well as quality and performance deficits of the MS and LC systems. Such evaluations are necessary if MRM-based quantitative plasma proteomics is to be used in verifying and validating putative disease biomarkers across different research laboratories and eventually in clinical laboratories.
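    The quantification principle behind such isotopically labeled internal standards can be summarized in a few lines: the analyte (light) peak area is ratioed to the co-eluting labeled (heavy) standard spiked at a known amount. The sketch below uses invented peak areas and a hypothetical 50 fmol spike; it is a generic illustration, not the authors' protocol.

    ```python
    # Back-of-the-envelope illustration of stable-isotope-labeled internal
    # standard quantification in MRM: amount = (light area / heavy area) * spike.
    # All numbers below are invented for illustration.

    def quantify(light_area: float, heavy_area: float, heavy_spike_fmol: float) -> float:
        """Estimated analyte amount (fmol) from the light/heavy area ratio."""
        return light_area / heavy_area * heavy_spike_fmol

    replicates = [(84_000, 100_000), (88_500, 101_200), (86_100, 99_400)]
    amounts = [quantify(l, h, heavy_spike_fmol=50.0) for l, h in replicates]

    mean = sum(amounts) / len(amounts)
    cv = (sum((a - mean) ** 2 for a in amounts) / (len(amounts) - 1)) ** 0.5 / mean * 100
    print(f"mean = {mean:.1f} fmol, CV = {cv:.1f}%")
    ```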

  18. Validation of Methods to Assess the Immunoglobulin Gene Repertoire in Tissues Obtained from Mice on the International Space Station.

    PubMed

    Rettig, Trisha A; Ward, Claire; Pecaut, Michael J; Chapes, Stephen K

    2017-07-01

    Spaceflight is known to affect immune cell populations. In particular, splenic B cell numbers decrease during spaceflight and in ground-based physiological models. Although antibody isotype changes have been assessed during and after space flight, an extensive characterization of the impact of spaceflight on antibody composition has not been conducted in mice. Next Generation Sequencing and bioinformatic tools are now available to assess antibody repertoires. We can now identify immunoglobulin gene-segment usage, junctional regions, and modifications that contribute to specificity and diversity. Due to limitations on the International Space Station, alternate sample collection and storage methods must be employed. Our group compared Illumina MiSeq sequencing data from multiple sample preparation methods in normal C57BL/6J mice to validate that sample preparation and storage would not bias the outcome of antibody repertoire characterization. In this report, we also compared the effects of sequencing technique and bioinformatic workflow on the data output when assessing IgH and Igκ variable gene usage. This included assessments of our bioinformatic workflow on Illumina HiSeq and MiSeq datasets; the workflow is specifically designed to reduce bias, capture the most information from Ig sequences, and produce a data set that provides other data-mining options. We validated our workflow by comparing our normal mouse MiSeq data to existing murine antibody repertoire studies, supporting its use in future antibody repertoire studies.
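    One elementary step in such repertoire characterization, tallying variable gene-segment usage from per-read annotations, is shown below as a minimal sketch; the gene names are arbitrary examples, and in practice the annotations would come from an upstream aligner/annotator such as IgBLAST or MiXCR rather than from this code.

    ```python
    # Toy illustration of V gene-segment usage tallying from per-read
    # annotations. Gene names are arbitrary examples, not real study output.
    from collections import Counter

    annotations = ["IGHV1-72", "IGHV1-72", "IGHV5-9", "IGHV1-72", "IGHV14-3"]
    usage = Counter(annotations)
    total = sum(usage.values())

    for gene, n in usage.most_common():
        print(f"{gene}: {n} reads ({n / total:.0%})")
    ```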

  19. Innovations in Medication Preparation Safety and Wastage Reduction: Use of a Workflow Management System in a Pediatric Hospital.

    PubMed

    Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara

    2017-01-01

    Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.

  20. Assessment of Sample Preparation Bias in Mass Spectrometry-Based Proteomics.

    PubMed

    Klont, Frank; Bras, Linda; Wolters, Justina C; Ongay, Sara; Bischoff, Rainer; Halmos, Gyorgy B; Horvatovich, Péter

    2018-04-17

    For mass spectrometry-based proteomics, the selected sample preparation strategy is a key determinant of the information that will be obtained. However, the corresponding selection is often not based on a fit-for-purpose evaluation. Here we report a comparison of in-gel (IGD), in-solution (ISD), on-filter (OFD), and on-pellet digestion (OPD) workflows on the basis of targeted (QconCAT-multiple reaction monitoring (MRM) method for mitochondrial proteins) and discovery proteomics (data-dependent acquisition, DDA) analyses using three different human head and neck tissues (i.e., nasal polyps, parotid gland, and palatine tonsils). Our study reveals differences between the sample preparation methods, for example, with respect to protein and peptide losses, quantification variability, protocol-induced methionine oxidation, and asparagine/glutamine deamidation, as well as identification of cysteine-containing peptides. However, none of the methods performed best for all types of tissues, which argues against the existence of a universal sample preparation method for proteome analysis.

  1. MALDI (matrix assisted laser desorption ionization) Imaging Mass Spectrometry (IMS) of skin: Aspects of sample preparation.

    PubMed

    de Macedo, Cristiana Santos; Anderson, David M; Schey, Kevin L

    2017-11-01

    MALDI (matrix assisted laser desorption ionization) Imaging Mass Spectrometry (IMS) allows molecular analysis of biological materials, making possible the identification and localization of molecules in tissues, and has been applied to address many questions on skin pathophysiology, as well as on studies about drug absorption and metabolism. Sample preparation for MALDI IMS is the most important part of the workflow, comprising specimen collection and preservation, tissue embedding, cryosectioning, washing, and matrix application. These steps must be carefully optimized for specific analytes of interest (lipids, proteins, drugs, etc.), representing a challenge for skin analysis. In this review, critical parameters for MALDI IMS sample preparation of skin samples will be described. In addition, specific applications of MALDI IMS of skin samples will be presented, including wound healing, neoplasia, and infection. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Adaptation of Laser Microdissection Technique for the Study of a Spontaneous Metastatic Mammary Carcinoma Mouse Model by NanoString Technologies

    PubMed Central

    Saylor, Karen L.; Anver, Miriam R.; Salomon, David S.; Golubeva, Yelena G.

    2016-01-01

    Laser capture microdissection (LCM) of tissue is an established tool in medical research for collection of distinguished cell populations under direct microscopic visualization for molecular analysis. LCM samples have been successfully analyzed in a number of genomic and proteomic downstream molecular applications. However, the LCM sample collection and preparation procedure has to be adapted to each downstream analysis platform. In the present manuscript we describe in detail the adaptation of LCM methodology for the collection and preparation of fresh frozen samples for NanoString analysis, based on a study of a mouse model of mammary gland carcinoma and its lung metastasis. Our adaptation of LCM sample preparation and workflow to the requirements of the NanoString platform allowed us to acquire samples with high RNA quality. The NanoString analysis of such samples provided sensitive detection of genes of interest and their associated molecular pathways. NanoString is a reliable gene expression analysis platform that can be effectively coupled with LCM. PMID:27077656

  3. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    PubMed Central

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry (MS)-based proteomics, automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply to both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE), including affinity enrichment strategies, have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  4. The LabTube - a novel microfluidic platform for assay automation in laboratory centrifuges.

    PubMed

    Kloke, A; Fiebach, A R; Zhang, S; Drechsel, L; Niekrawietz, S; Hoehl, M M; Kneusel, R; Panthel, K; Steigert, J; von Stetten, F; Zengerle, R; Paust, N

    2014-05-07

    Assay automation is the key to successful transformation of modern biotechnology into routine workflows. Yet, it requires considerable investment in processing devices and auxiliary infrastructure, which is not cost-efficient for laboratories with low or medium sample throughput or for point-of-care testing. To close this gap, we present the LabTube platform, which is based on assay-specific disposable cartridges for processing in laboratory centrifuges. LabTube cartridges comprise interfaces for sample loading and downstream applications and fluidic unit operations for release of prestored reagents, mixing, and solid phase extraction. Process control is achieved by a centrifugally actuated ballpen mechanism. To demonstrate the workflow and functionality of the LabTube platform, we show two LabTube-automated sample preparation assays from laboratory routines: DNA extraction from whole blood and purification of His-tagged proteins. Equal DNA and protein yields were observed compared to manual reference runs, while LabTube automation significantly reduced the hands-on time to one minute per extraction.

  5. Targeted Selected Reaction Monitoring Mass Spectrometric Immunoassay for Insulin-like Growth Factor 1

    PubMed Central

    Niederkofler, Eric E.; Phillips, David A.; Krastins, Bryan; Kulasingam, Vathany; Kiernan, Urban A.; Tubbs, Kemmons A.; Peterman, Scott M.; Prakash, Amol; Diamandis, Eleftherios P.; Lopez, Mary F.; Nedelkov, Dobrin

    2013-01-01

    Insulin-like growth factor 1 (IGF1) is an important biomarker of human growth disorders that is routinely analyzed in clinical laboratories. Mass spectrometry-based workflows offer a viable alternative to standard IGF1 immunoassays, which utilize various pre-analytical preparation strategies. In this work we developed an assay that incorporates a novel sample preparation method for dissociating IGF1 from its binding proteins. The workflow also includes an immunoaffinity step using antibody-derivatized pipette tips, followed by elution, trypsin digestion, and LC-MS/MS separation and detection of the signature peptides in a selected reaction monitoring (SRM) mode. The resulting quantitative mass spectrometric immunoassay (MSIA) exhibited good linearity in the range of 1 to 1,500 ng/mL IGF1, intra- and inter-assay precision with CVs of less than 10%, and lowest limits of detection of 1 ng/mL. The linearity and recovery characteristics of the assay were also established, and the new method was compared to a commercially available immunoassay using a large cohort of human serum samples. The IGF1 SRM MSIA is well suited for use in clinical laboratories. PMID:24278387

  6. Clinical utility of an automated instrument for gram staining single slides.

    PubMed

    Baron, Ellen Jo; Mix, Samantha; Moradi, Wais

    2010-06-01

    Gram stains of 87 different clinical samples were prepared by the laboratory's conventional methods (automated or manual) and by a new single-slide-type automated staining instrument, GG&B AGS-1000. Gram stains from either heat- or methanol-fixed slides stained with the new instrument were easy to interpret, and results were essentially the same as those from the methanol-fixed slides prepared as a part of the routine workflow. This instrument is well suited to a rapid-response laboratory where Gram stain requests are commonly received on a stat basis.

  7. Temporal bone bank: complying with European Union directives on human tissue and cells.

    PubMed

    Van Rompaey, Vincent; Vandamme, Wouter; Muylle, Ludo; Van de Heyning, Paul H

    2012-06-01

    Availability of allograft tympano-ossicular systems (ATOS) provides unique reconstructive capabilities, allowing more radical removal of middle ear pathology. To provide ATOS, the University of Antwerp Temporal Bone Bank (UATB) was established in 1988. ATOS use was stopped in many countries because of safety issues concerning human tissue transplantation. Our objective was to maintain an ATOS tissue bank complying with European Union (EU) directives on human tissues and cells. The guidelines of the Belgian Superior Health Council, including EU directive requirements, were rigorously applied to UATB infrastructure, workflow protocols and activity. Workflow protocols were updated and an internal audit was performed to check and improve consistency with established quality systems and changing legislation. The Belgian Federal Agency of Medicines and Health Products performed an inspection to examine compliance with national legislation and EU directives on human tissues and cells. A sample of important procedures was meticulously examined in its workflow setting, next to assessment of the infrastructure and personnel. Results are reported on infrastructure, personnel, administrative workflow, procurement, preparation, processing, distribution, internal audit and inspection by the competent authority. Donors procured: 2006, 93 (45.1%); 2007, 64 (20.6%); 2008, 56 (13.1%); 2009, 79 (6.9%). The UATB was approved by the Minister of Health without critical or important shortcomings. The Ministry grants registration for 2 years at a time. An ATOS tissue bank complying with EU regulations on human allografts is feasible and critical to assure that the patient receives tissue which is safe, individually checked and prepared in a suitable environment.

  8. Workflow and maintenance characteristics of five automated laboratory instruments for the diagnosis of sexually transmitted infections.

    PubMed

    Ratnam, Sam; Jang, Dan; Gilchrist, Jodi; Smieja, Marek; Poirier, Andre; Hatchette, Todd; Flandin, Jean-Frederic; Chernesky, Max

    2014-07-01

    The choice of a suitable automated system for a diagnostic laboratory depends on various factors. Comparative workflow studies provide quantifiable and objective metrics to determine hands-on time during specimen handling and processing, reagent preparation, return visits and maintenance, and test turnaround time and throughput. Using objective time study techniques, workflow characteristics for processing 96 and 192 tests were determined on m2000 RealTime (Abbott Molecular), Viper XTR (Becton Dickinson), cobas 4800 (Roche Molecular Diagnostics), Tigris (Hologic Gen-Probe), and Panther (Hologic Gen-Probe) platforms using second-generation assays for Chlamydia trachomatis and Neisseria gonorrhoeae. A combination of operational and maintenance steps requiring manual labor showed that Panther had the shortest overall hands-on times and Viper XTR the longest. Both Panther and Tigris showed greater efficiency whether 96 or 192 tests were processed. Viper XTR and Panther had the shortest times to results and m2000 RealTime the longest. Sample preparation and loading time was the shortest for Panther and longest for cobas 4800. Mandatory return visits were required only for m2000 RealTime and cobas 4800 when 96 tests were processed, and both required substantially more hands-on time than the other systems due to increased numbers of return visits when 192 tests were processed. These results show that there are substantial differences in the amount of labor required to operate each system. Assay performance, instrumentation, testing capacity, workflow, maintenance, and reagent costs should be considered in choosing a system. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  9. Parallel Workflow for High-Throughput (>1,000 Samples/Day) Quantitative Analysis of Human Insulin-Like Growth Factor 1 Using Mass Spectrometric Immunoassay

    PubMed Central

    Oran, Paul E.; Trenchevska, Olgica; Nedelkov, Dobrin; Borges, Chad R.; Schaab, Matthew R.; Rehder, Douglas S.; Jarvis, Jason W.; Sherma, Nisha D.; Shen, Luhui; Krastins, Bryan; Lopez, Mary F.; Schwenke, Dawn C.; Reaven, Peter D.; Nelson, Randall W.

    2014-01-01

    Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a performance rating of >1,000 samples/day and a capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via a Bland-Altman method comparison test, resulting in a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF-MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of the enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications. PMID:24664114
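    A quick arithmetic check of the throughput figures quoted above (1,054 samples in roughly 9 hours), shown as a minimal sketch:

    ```python
    # Consistency check of the throughput claim using numbers from the abstract.
    samples, hours = 1054, 9
    per_hour = samples / hours
    print(f"~{per_hour:.0f} samples/hour")                # ~117 samples/hour
    print(f"~{per_hour * 24:.0f} samples/day sustained")  # ~2,810/day, consistent with >1,000/day
    ```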

  10. A streamlined method for analysing genome-wide DNA methylation patterns from low amounts of FFPE DNA.

    PubMed

    Ludgate, Jackie L; Wright, James; Stockwell, Peter A; Morison, Ian M; Eccles, Michael R; Chatterjee, Aniruddha

    2017-08-31

    Formalin fixed paraffin embedded (FFPE) tumor samples are a major source of DNA from patients in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries relying on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis. Here we describe a complete and streamlined workflow for preparing next generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks and extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR) based test, preparing reduced representation bisulfite sequencing libraries and massively parallel sequencing. The main features and advantages of this protocol are: an optimized method for extracting good-quality DNA from FFPE tissues; an efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng of DNA from FFPE tissue; and incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing. We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome scale, and we believe this will facilitate clinical epigenetic research that involves the use of FFPE tissue.
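    Bisulfite conversion efficiency is commonly estimated as the fraction of cytosines outside CpG context that read as T after conversion; the sketch below shows that generic calculation with invented counts. It illustrates the concept only and is not the PCR-based test used in this protocol.

    ```python
    # Illustrative calculation: unmethylated cytosines outside CpG context should
    # read as T after bisulfite conversion, so the C->T rate at non-CpG cytosine
    # positions approximates the conversion efficiency. Counts are invented.

    def conversion_efficiency(non_cpg_c_calls: int, non_cpg_t_calls: int) -> float:
        """Fraction of non-CpG cytosine positions read as T (i.e., converted)."""
        total = non_cpg_c_calls + non_cpg_t_calls
        return non_cpg_t_calls / total if total else float("nan")

    # Toy counts for one library.
    eff = conversion_efficiency(non_cpg_c_calls=312, non_cpg_t_calls=98_200)
    print(f"estimated conversion efficiency: {eff:.2%}")  # ~99.68%; >99% is often used as a threshold
    ```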

  11. Quantitation of heat-shock proteins in clinical samples using mass spectrometry.

    PubMed

    Kaur, Punit; Asea, Alexzander

    2011-01-01

    Mass spectrometry (MS) is a powerful analytical tool for proteomics research and drug and biomarker discovery. MS enables identification and quantification of known and unknown compounds by revealing their structural and chemical properties. Proper sample preparation for MS-based analysis is a critical step in the proteomics workflow because the quality and reproducibility of sample extraction and preparation for downstream analysis significantly impact the separation and identification capabilities of mass spectrometers. The highly expressed proteins represent potential biomarkers that could aid in diagnosis, therapy, or drug development. Because the proteome is so complex, there is no one standard method for preparing protein samples for MS analysis. Protocols differ depending on the type of sample, source, experiment, and method of analysis. Molecular chaperones play significant roles in almost all biological functions due to their capacity for detecting intracellular denatured/unfolded proteins, initiating refolding or denaturation of such malfolded protein sequences and more recently for their role in the extracellular milieu as chaperokines. In this chapter, we describe the latest techniques for quantitating the expression of molecular chaperones in human clinical samples.

  12. The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.

    PubMed

    Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark

    2018-07-01

    The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and the rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the amount of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
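    A quick check of the reported detection rate from the figures in the abstract (1,160 detected errors among 101,765 doses prepared after implementation), as a one-line sketch:

    ```python
    # Sanity check of the post-implementation detection rate quoted above.
    doses_after, errors_detected = 101_765, 1_160
    print(f"detected error rate: {errors_detected / doses_after:.2%}")  # ~1.14%, matching the reported figure
    ```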

  13. How iSamples (Internet of Samples in the Earth Sciences) Improves Sample and Data Stewardship in the Next Generation of Geoscientists

    NASA Astrophysics Data System (ADS)

    Hallett, B. W.; Dere, A. L. D.; Lehnert, K.; Carter, M.

    2016-12-01

    Vast numbers of physical samples are routinely collected by geoscientists to probe key scientific questions related to global climate change, biogeochemical cycles, magmatic processes, mantle dynamics, etc. Despite their value as irreplaceable records of nature, the majority of these samples remain undiscoverable by the broader scientific community because they lack a digital presence or are not well-documented enough to facilitate their discovery and reuse for future scientific and educational use. The NSF EarthCube iSamples Research Coordination Network seeks to develop a unified approach across all Earth Science disciplines for the registration, description, identification, and citation of physical specimens in order to take advantage of the new opportunities that cyberinfrastructure offers. Even as consensus around best practices begins to emerge, such as the use of the International Geo Sample Number (IGSN), more work is needed to communicate these practices to investigators to encourage widespread adoption. Recognizing the importance of students and early career scientists in particular to transforming data and sample management practices, the iSamples Education and Training Working Group is developing training modules for sample collection, documentation, and management workflows. These training materials are made available to educators/research supervisors online at http://earthcube.org/group/isamples and can be modularized for supervisors to create a customized research workflow. This study details the design and development of several sample management tutorials, created by early career scientists and documented in collaboration with undergraduate research students in field and lab settings. Modules under development focus on rock outcrops, rock cores, soil cores, and coral samples, with an emphasis on sample management throughout the collection, analysis and archiving process. We invite others to share their sample management/registration workflows and to develop training modules. This educational approach, with evolving digital materials, can help prepare future scientists to perform research in a way that will contribute to EarthCube data integration and discovery.

  14. Rapid microscale in-gel processing and digestion of proteins using surface acoustic waves.

    PubMed

    Kulkarni, Ketav P; Ramarathinam, Sri H; Friend, James; Yeo, Leslie; Purcell, Anthony W; Perlmutter, Patrick

    2010-06-21

    A new method for in-gel sample processing and tryptic digestion of proteins is described. Sample preparation, rehydration, in situ digestion and peptide extraction from gel slices are dramatically accelerated by treating the gel slice with surface acoustic waves (SAWs). Only 30 minutes total workflow time is required for this new method to produce base peak chromatograms (BPCs) of similar coverage and intensity to those observed for traditional processing and overnight digestion. Simple set up, good reproducibility, excellent peptide recoveries, rapid turnover of samples and high confidence protein identifications put this technology at the forefront of the next generation of proteomics sample processing tools.

  15. Clinical Utility of an Automated Instrument for Gram Staining Single Slides

    PubMed Central

    Baron, Ellen Jo; Mix, Samantha; Moradi, Wais

    2010-01-01

    Gram stains of 87 different clinical samples were prepared by the laboratory's conventional methods (automated or manual) and by a new single-slide-type automated staining instrument, GG&B AGS-1000. Gram stains from either heat- or methanol-fixed slides stained with the new instrument were easy to interpret, and results were essentially the same as those from the methanol-fixed slides prepared as a part of the routine workflow. This instrument is well suited to a rapid-response laboratory where Gram stain requests are commonly received on a stat basis. PMID:20410348

  16. A Fast Solution to NGS Library Prep with Low Nanogram DNA Input

    PubMed Central

    Liu, Pingfang; Lohman, Gregory J.S.; Cantor, Eric; Langhorst, Bradley W.; Yigit, Erbay; Apone, Lynne M.; Munafo, Daniela B.; Stewart, Fiona J.; Evans, Thomas C.; Nichols, Nicole; Dimalanta, Eileen T.; Davis, Theodore B.; Sumner, Christine

    2013-01-01

    Next Generation Sequencing (NGS) has significantly impacted human genetics, enabling a comprehensive characterization of the human genome as well as a better understanding of many genomic abnormalities. By delivering massive amounts of DNA sequence at unprecedented speed and cost, NGS promises to make personalized medicine a reality in the foreseeable future. To date, library construction with clinical samples has been a challenge, primarily due to the limited quantities of sample DNA available. Our objective here was to overcome this challenge by developing the NEBNext® Ultra DNA Library Prep Kit, a fast library preparation method. Specifically, we streamlined the workflow utilizing novel NEBNext reagents and adaptors, including a new DNA polymerase that has been optimized to minimize GC bias. As a result of this work, we have developed a simple method for library construction from as little as 5 ng of DNA, which can be used for both intact and fragmented DNA. Moreover, the workflow is compatible with multiple NGS platforms.

  17. Analysis of acute brain slices by electron microscopy: a correlative light-electron microscopy workflow based on Tokuyasu cryo-sectioning.

    PubMed

    Loussert Fonta, Celine; Leis, Andrew; Mathisen, Cliff; Bouvier, David S; Blanchard, Willy; Volterra, Andrea; Lich, Ben; Humbel, Bruno M

    2015-01-01

    Acute brain slices are slices of brain tissue that are kept vital in vitro for further recordings and analyses. This tool is of major importance in neurobiology and allows the study of brain cells such as microglia, astrocytes, neurons and their inter/intracellular communications via ion channels or transporters. In combination with light/fluorescence microscopies, acute brain slices enable the ex vivo analysis of specific cells or groups of cells inside the slice, e.g. astrocytes. To bridge ex vivo knowledge of a cell with its ultrastructure, we developed a correlative microscopy approach for acute brain slices. The workflow begins with sampling of the tissue and precise trimming of a region of interest, which contains GFP-tagged astrocytes that can be visualised by fluorescence microscopy of ultrathin sections. The astrocytes and their surroundings are then analysed by high resolution scanning transmission electron microscopy (STEM). An important aspect of this workflow is the modification of a commercial cryo-ultramicrotome to observe the fluorescent GFP signal during the trimming process. This ensures that each section contains at least one GFP-expressing astrocyte. After cryo-sectioning, a map of the GFP-expressing astrocytes is established and transferred to correlation software installed on a focused ion beam scanning electron microscope equipped with a STEM detector. Next, the areas displaying fluorescence are selected for high resolution STEM imaging. An overview area (e.g. a whole mesh of the grid) is imaged with an automated tiling and stitching process. In the final stitched image, the local organisation of the brain tissue can be surveyed or areas of interest can be magnified to observe fine details, e.g. vesicles or gold labels on specific proteins. The robustness of this workflow is contingent on the quality of sample preparation, based on Tokuyasu's protocol. This method results in a reasonable compromise between preservation of morphology and maintenance of antigenicity. Finally, an important feature of this approach is that the fluorescence of the GFP signal is preserved throughout the entire preparation process until the last step before electron microscopy. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Evaluation of the Illumigene Malaria LAMP: A Robust Molecular Diagnostic Tool for Malaria Parasites

    PubMed Central

    Lucchi, Naomi W.; Gaye, Marie; Diallo, Mammadou Alpha; Goldman, Ira F.; Ljolje, Dragan; Deme, Awa Bineta; Badiane, Aida; Ndiaye, Yaye Die; Barnwell, John W.; Udhayakumar, Venkatachalam; Ndiaye, Daouda

    2016-01-01

    Isothermal nucleic acid amplification assays, such as loop-mediated isothermal amplification (LAMP), are well suited for field use as they do not require thermal cyclers to amplify the DNA. To further facilitate the use of LAMP assays in remote settings, simpler sample preparation methods and lyophilized reagents are required. The performance of a commercial malaria LAMP assay (Illumigene Malaria LAMP) was evaluated using two sample preparation workflows (simple filtration prep (SFP) and gravity-driven filtration prep (GFP)) and pre-dispensed lyophilized reagents. Laboratory and clinical samples were tested in a field laboratory in Senegal and the results independently confirmed in a reference laboratory in the U.S.A. The Illumigene Malaria LAMP assay was easily implemented in the clinical laboratory and gave similar results to a real-time PCR reference test with limits of detection of ≤2.0 parasites/μl depending on the sample preparation method used. This assay reliably detected Plasmodium sp. parasites in a simple low-tech format, providing a much needed alternative to the more complex molecular tests for malaria diagnosis. PMID:27827432

  19. Spatially-Resolved Proteomics: Rapid Quantitative Analysis of Laser Capture Microdissected Alveolar Tissue Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clair, Geremy; Piehowski, Paul D.; Nicola, Teodora

    Global proteomics approaches allow characterization of whole tissue lysates to an impressive depth. However, it is now increasingly recognized that to better understand the complexity of multicellular organisms, global protein profiling of specific spatially defined regions/substructures of tissues (i.e. spatially-resolved proteomics) is essential. Laser capture microdissection (LCM) enables microscopic isolation of defined regions of tissues, preserving crucial spatial information. However, current proteomics workflows entail several manual sample preparation steps and are challenged by the microscopic, mass-limited samples generated by LCM, which impacts measurement robustness, quantification, and throughput. Here, we coupled LCM with a fully automated sample preparation workflow that, with a single manual step, allows protein extraction, tryptic digestion, peptide cleanup and LC-MS/MS analysis of proteomes from microdissected tissues. Benchmarking against the current state of the art in ultrasensitive global proteomic analysis, our approach demonstrated significant improvements in quantification and throughput. Using our LCM-SNaPP proteomics approach, we characterized, to a depth of more than 3,400 proteins, the ontogeny of protein changes during normal lung development in laser capture microdissected alveolar tissue containing ~4,000 cells per sample. Importantly, the data revealed quantitative changes for 350 low abundance transcription factors and signaling molecules, confirming earlier transcript-level observations and defining seven modules of coordinated transcription factor/signaling molecule expression patterns, suggesting that a complex network of temporal regulatory control directs normal lung development with epigenetic regulation fine-tuning pre-natal developmental processes. Our LCM-proteomics approach facilitates efficient, spatially-resolved, ultrasensitive global proteomics analyses at high throughput, enabling several clinical and biological applications.

  20. Developing integrated workflows for the digitisation of herbarium specimens using a modular and scalable approach.

    PubMed

    Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J

    2012-01-01

    Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow comprises three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow.
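
    To make the three data categories concrete, the sketch below groups the fields of a hypothetical specimen record into label, curatorial, and supplementary data, reflecting the idea that each category can be captured at a different stage of the workflow. The field names are invented and are not the Royal Botanic Garden Edinburgh schema.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SpecimenRecord:
    """Hypothetical split of a digitised herbarium record into the three data
    categories described above; each can be filled in at a different point in
    the workflow."""
    barcode: str
    label_data: Dict[str, str] = field(default_factory=dict)         # from the specimen label (often via OCR)
    curatorial_data: Dict[str, str] = field(default_factory=dict)    # storage location, loans, conservation notes
    supplementary_data: Dict[str, str] = field(default_factory=dict) # later additions, e.g. georeferencing

rec = SpecimenRecord(barcode="E00123456")
rec.curatorial_data["cabinet"] = "C-142"                  # captured quickly at imaging time
rec.label_data["collector"] = "Smith, J."                 # transcribed later, possibly OCR-assisted
rec.supplementary_data["georeference"] = "55.965, -3.209" # added once the label data are available
print(rec)
```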

  1. Developing integrated workflows for the digitisation of herbarium specimens using a modular and scalable approach

    PubMed Central

    Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J

    2012-01-01

    Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow comprises three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow. PMID:22859881

  2. An Internal Standard for Assessing Phosphopeptide Recovery from Metal Ion/Oxide Enrichment Strategies

    NASA Astrophysics Data System (ADS)

    Paulo, Joao A.; Navarrete-Perea, Jose; Erickson, Alison R.; Knott, Jeffrey; Gygi, Steven P.

    2018-04-01

    Phosphorylation-mediated signaling pathways have major implications in cellular regulation and disease. However, proteins with roles in these pathways are frequently less abundant and phosphorylation is often sub-stoichiometric. As such, the efficient enrichment, and subsequent recovery, of phosphorylated peptides is vital. Mass spectrometry-based proteomics is a well-established approach for quantifying thousands of phosphorylation events in a single experiment. We designed a peptide internal standard-based assay directed toward sample preparation strategies for mass spectrometry analysis to better understand phosphopeptide recovery from enrichment strategies. We coupled mass-differential tandem mass tag (mTMT) reagents (specifically, TMTzero and TMTsuper-heavy), nine mass spectrometry-amenable phosphopeptides (phos9), and peak area measurements from extracted ion chromatograms to determine phosphopeptide recovery. We showcase this mTMT/phos9 recovery assay by evaluating three phosphopeptide enrichment workflows. Our assay provides data on the recovery of phosphopeptides, which complement other metrics, namely the number of identified phosphopeptides and enrichment specificity. Our mTMT/phos9 assay is applicable to any enrichment protocol in a typical experimental workflow irrespective of sample origin or labeling strategy. [Figure not available: see fulltext.]
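
    The recovery readout reduces to a ratio of extracted-ion-chromatogram peak areas for the spiked standards measured before and after enrichment. The sketch below shows that calculation for a hypothetical set of standard peptides; the names and numbers are invented for illustration and do not reproduce the authors' assay or data.

```python
# Hypothetical peak areas (arbitrary units) for spiked standard peptides
# measured in the pre-enrichment input and in the enriched eluate.
input_areas  = {"pep1": 2.0e7, "pep2": 1.5e7, "pep3": 3.2e7}
eluate_areas = {"pep1": 1.4e7, "pep2": 0.9e7, "pep3": 2.6e7}

def percent_recovery(before, after):
    """Recovery of each standard = eluate peak area / input peak area * 100."""
    return {p: 100.0 * after[p] / before[p] for p in before}

per_peptide = percent_recovery(input_areas, eluate_areas)
for pep, rec in per_peptide.items():
    print(f"{pep}: {rec:.1f}% recovered")

mean_recovery = sum(per_peptide.values()) / len(per_peptide)
print(f"mean recovery across standards: {mean_recovery:.1f}%")
```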

  3. Simplification and improvement of protein detection in two-dimensional electrophoresis gels with SERVA HPE™ lightning red.

    PubMed

    Griebel, Anja; Obermaier, Christian; Westermeier, Reiner; Moche, Martin; Büttner, Knut

    2013-07-01

    A new fluorescent amino-reactive dye has been tested for labelling proteins both prior to electrophoretic separation and between the two steps of two-dimensional electrophoresis. A series of experiments showed that the labelling of lysines with this dye is compatible with all standard additives used for sample preparation, including reducing substances and carrier ampholytes. Using this dye for pre-labelling considerably simplifies the electrophoresis and detection workflow and provides highly sensitive and quantitative visualisation of proteins.

  4. AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Yingssu; McPhillips, Scott E.

    New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
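
    The essence of such a workflow is a fixed sequence of steps with a selection rule between screening and full data collection. The sketch below is a generic, heavily simplified stand-in (it is not the AutoDrug or RestFlow API), with an invented scoring function and threshold, to show how crystals might be screened and then promoted to data collection.

```python
import random

def screen(sample_id):
    """Stand-in for a screening exposure: returns an invented diffraction-quality score."""
    return random.uniform(0.0, 3.0)

def collect_and_process(sample_id):
    """Stand-in for full data collection, processing, and fragment-density detection."""
    print(f"  collecting full dataset for {sample_id} ...")

def run_cassette(sample_ids, score_threshold=2.0):
    """Screen every sample, then collect only those passing the threshold."""
    screened = {s: screen(s) for s in sample_ids}
    selected = [s for s, score in screened.items() if score >= score_threshold]
    print(f"selected {len(selected)} of {len(sample_ids)} samples for collection")
    for s in selected:
        collect_and_process(s)

run_cassette([f"xtal_{i:03d}" for i in range(1, 11)])
```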

  5. Applying lean principles to continuous renal replacement therapy processes.

    PubMed

    Benfield, C Brett; Brummond, Philip; Lucarotti, Andrew; Villarreal, Maria; Goodwin, Adam; Wonnacott, Rob; Talley, Cheryl; Heung, Michael

    2015-02-01

    The application of lean principles to continuous renal replacement therapy (CRRT) processes in an academic medical center is described. A manual audit over six consecutive weeks revealed that 133 5-L bags of CRRT solution were discarded after being dispensed from pharmacy but before clinical use. Lean principles were used to examine the workflow for CRRT preparation and develop and implement an intervention. An educational program was developed to encourage and enhance direct communication between nursing and pharmacy about changes in a patient's condition or CRRT order. It was through this education program that the reordering workflow shifted from nurses to pharmacy technicians. The primary outcome was the number of CRRT solution bags delivered in the preintervention and postintervention periods. Nurses and pharmacy technicians were surveyed to determine their satisfaction with the workflow change. After implementation of lean principles, the mean number of CRRT solution bags dispensed per day of CRRT decreased substantially. Respondents' overall satisfaction with the CRRT solution preparation process increased during the postintervention period, as did the satisfaction scores for each individual component of the workflow. The decreased solution waste resulted in projected annual cost savings exceeding $70,000 in product alone. The use of lean principles to identify medication waste in the CRRT workflow and implementation of an intervention to shift the workload from intensive care unit nurses to pharmacy technicians led to reduced CRRT solution waste, improved efficiency of CRRT workflow, and increased satisfaction among staff. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  6. Delivery of femtolitre droplets using surface acoustic wave based atomisation for cryo-EM grid preparation.

    PubMed

    Ashtiani, Dariush; Venugopal, Hari; Belousoff, Matthew; Spicer, Bradley; Mak, Johnson; Neild, Adrian; de Marco, Alex

    2018-04-06

    Cryo-Electron Microscopy (cryo-EM) has become an invaluable tool for structural biology. Over the past decade, the advent of direct electron detectors and automated data acquisition has established cryo-EM as a central method in structural biology. However, challenges remain in the reliable and efficient preparation of samples in a manner which is compatible with high time resolution. The delivery of sample onto the grid is recognized as a critical step in the workflow as it is a source of variability and loss of material due to the blotting which is usually required. Here, we present a method for sample delivery and plunge freezing based on the use of Surface Acoustic Waves to deploy 6-8 µm droplets to the EM grid. This method minimises the sample dead volume and ensures vitrification within 52.6 ms from the moment the sample leaves the microfluidics chip. We demonstrate a working protocol to minimize the atomised volume and apply it to plunge freeze three different samples and provide proof that no damage occurs due to the interaction between the sample and the acoustic waves. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples.

    PubMed

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David; Šlapeta, Jan

    2017-09-01

    Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of high sensitivity and species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. A new molecular diagnostic workflow for the highly sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74-0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes, enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65-88%), compared to the sensitivity (91-100%) of the new molecular diagnostic workflow. Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited.
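
    The performance figures quoted above come down to standard diagnostic arithmetic: sensitivity and specificity against a reference result, and a coefficient of determination between the quantitative PCR signal and faecal egg counts. The sketch below computes these quantities from a small invented data set, purely to show the calculations; it does not use the study's data.

```python
# Invented paired results: 1 = positive, 0 = negative by the reference method
reference = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
new_test  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1]

tp = sum(1 for r, t in zip(reference, new_test) if r == 1 and t == 1)
fn = sum(1 for r, t in zip(reference, new_test) if r == 1 and t == 0)
tn = sum(1 for r, t in zip(reference, new_test) if r == 0 and t == 0)
fp = sum(1 for r, t in zip(reference, new_test) if r == 0 and t == 1)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")

# R^2 between an invented quantitative PCR value and faecal egg counts
qpcr = [5.1, 4.2, 3.8, 6.0, 0.2, 0.1, 0.0, 5.5, 0.3, 4.9]
fec  = [120, 80, 60, 150, 2, 0, 0, 130, 4, 100]
n = len(qpcr)
mean_x, mean_y = sum(qpcr) / n, sum(fec) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(qpcr, fec))
var_x = sum((x - mean_x) ** 2 for x in qpcr)
var_y = sum((y - mean_y) ** 2 for y in fec)
r2 = cov ** 2 / (var_x * var_y)   # squared Pearson correlation
print(f"R^2 between qPCR signal and FEC = {r2:.2f}")
```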

  8. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography.

    PubMed

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M F; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A; Gotz, Andy

    2012-08-01

    The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE.

  9. Quantifying nursing workflow in medication administration.

    PubMed

    Keohane, Carol A; Bane, Anne D; Featherstone, Erica; Hayes, Judy; Woolf, Seth; Hurley, Ann; Bates, David W; Gandhi, Tejal K; Poon, Eric G

    2008-01-01

    New medication administration systems are showing promise in improving patient safety at the point of care, but adoption of these systems requires significant changes in nursing workflow. To prepare for these changes, the authors report on a time-motion study that measured the proportion of time that nurses spend on various patient care activities, focusing on medication administration-related activities. Implications of their findings are discussed.

  10. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    PubMed

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    The effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors was evaluated. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the barcode-scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  11. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
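
    Because the toolchain is organised as four stages with well-defined hand-offs, it can be pictured as a pipeline in which each stage consumes the previous stage's output. The sketch below is a schematic stand-in with placeholder data structures; it does not reproduce the Proto language or the actual tools.

```python
def specify(program_text):
    """Specification stage: a high-level program (placeholder string)."""
    return {"spec": program_text}

def compile_to_agrn(spec):
    """Compilation stage: turn the spec into an abstract regulatory network (placeholder)."""
    return {"agrn": ["sensor -> repressor", "repressor -| reporter"], **spec}

def assign_parts(design):
    """Part assignment stage: map abstract nodes to DNA parts from a (hypothetical) database."""
    parts = {"sensor": "pPart_001", "repressor": "pPart_017", "reporter": "pPart_042"}
    return {"parts": parts, **design}

def plan_assembly(design):
    """Assembly stage: order the chosen parts into an assembly protocol (placeholder)."""
    return {"assembly_plan": list(design["parts"].values()), **design}

# Chain the stages exactly as a pipeline: each stage feeds the next.
result = plan_assembly(assign_parts(compile_to_agrn(specify("if small_molecule then express reporter"))))
print(result["assembly_plan"])
```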

  12. An approach to optimize sample preparation for MALDI imaging MS of FFPE sections using fractional factorial design of experiments.

    PubMed

    Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter

    2016-09-01

    A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs were recommended.
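
    A two-level fractional factorial design lets a small number of runs estimate the main effect of each parameter on a quality score. The sketch below builds a tiny 2^(3-1) design and estimates main effects from invented quality scores; it is a generic illustration, not the authors' ten-parameter design or their five-way ANOVA.

```python
import itertools

# 2^(3-1) fractional factorial: full factorial in A and B, with C = A*B (the defining relation).
runs = []
for a, b in itertools.product([-1, 1], repeat=2):
    runs.append({"A": a, "B": b, "C": a * b})

# Invented quality scores for the four runs (e.g. mean peak intensity of target peptides).
scores = [0.62, 0.71, 0.58, 0.83]

def main_effect(factor):
    """Difference between the mean score at the high and low level of a factor."""
    high = [s for run, s in zip(runs, scores) if run[factor] == 1]
    low  = [s for run, s in zip(runs, scores) if run[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for f in ("A", "B", "C"):
    print(f"main effect of {f}: {main_effect(f):+.3f}")
# Factors with the largest absolute effects are the ones an ANOVA would flag
# as contributing most to the variance of the quality metric.
```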

  13. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples

    PubMed Central

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David

    2017-01-01

    Background Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of high sensitivity and species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. Methodology/Principal findings A new molecular diagnostic workflow for the highly sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74–0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes, enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65–88%), compared to the sensitivity (91–100%) of the new molecular diagnostic workflow. Conclusions/Significance Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited. PMID:28915255

  14. Development of a Multiplexed Liquid Chromatography Multiple-Reaction-Monitoring Mass Spectrometry (LC-MRM/MS) Method for Evaluation of Salivary Proteins as Oral Cancer Biomarkers*

    PubMed Central

    Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting

    2017-01-01

    Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. PMID:28235782

  15. Building asynchronous geospatial processing workflows with web services

    NASA Astrophysics Data System (ADS)

    Zhao, Peisheng; Di, Liping; Yu, Genong

    2012-02-01

    Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches to and architecture of workflow code for the support of asynchronous behavior. A sample geospatial processing workflow, issued by the Open Geospatial Consortium (OGC) Web Service, Phase 6 (OWS-6), is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
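
    The core asynchronous pattern is: submit a processing request, receive a job reference immediately, and retrieve the result later by polling or callback. The sketch below shows a submit-and-poll loop against a simulated in-memory service; the function names and fields are invented, and real OGC or WS-BPEL services define their own interfaces.

```python
import time
import uuid

# A tiny in-memory stand-in for a remote geoprocessing service.
_jobs = {}

def submit_job(request):
    """Client initiates a request and immediately gets back a job id (asynchronous)."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"submitted_at": time.time(), "request": request}
    return job_id

def poll_job(job_id):
    """Return the result if the (simulated) processing has finished, else None."""
    job = _jobs[job_id]
    if time.time() - job["submitted_at"] < 1.0:   # pretend processing takes ~1 s
        return None
    return {"status": "done", "result": f"processed {job['request']}"}

job = submit_job("reproject + clip coverage")      # client is free to do other work now
while (result := poll_job(job)) is None:           # later: poll until the service responds
    print("still running ...")
    time.sleep(0.3)
print(result["result"])
```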

  16. A workflow to preserve genome-quality tissue samples from plants in botanical gardens and arboreta.

    PubMed

    Gostel, Morgan R; Kelloff, Carol; Wallick, Kyle; Funk, Vicki A

    2016-09-01

    Internationally, gardens hold diverse living collections that can be preserved for genomic research. Workflows have been developed for genomic tissue sampling in other taxa (e.g., vertebrates), but are inadequate for plants. We outline a workflow for tissue sampling intended for two audiences: botanists interested in genomics research and garden staff who plan to voucher living collections. Standard herbarium methods are used to collect vouchers, label information and images are entered into a publicly accessible database, and leaf tissue is preserved in silica and liquid nitrogen. A five-step approach for genomic tissue sampling is presented for sampling from living collections according to current best practices. Collecting genome-quality samples from gardens is an economical and rapid way to make tissue from the diversity of plants on Earth available for scientific research. The Global Genome Initiative will facilitate and lead this endeavor through international partnerships.

  17. Spatially resolved proteome mapping of laser capture microdissected tissue with automated sample transfer to nanodroplets.

    PubMed

    Zhu, Ying; Dou, Maowei; Piehowski, Paul D; Liang, Yiran; Wang, Fangjun; Chu, Rosalie K; Chrisler, Will; Smith, Jordan N; Schwarz, Kaitlynn C; Shen, Yufeng; Shukla, Anil K; Moore, Ronald J; Smith, Richard D; Qian, Wei-Jun; Kelly, Ryan T

    2018-06-24

    Current mass spectrometry (MS)-based proteomics approaches are ineffective for mapping protein expression in tissue sections with high spatial resolution due to the limited overall sensitivity of conventional workflows. Here we report an integrated and automated method to advance spatially resolved proteomics by seamlessly coupling laser capture microdissection (LCM) with a recently developed nanoliter-scale sample preparation system termed nanoPOTS (Nanodroplet Processing in One pot for Trace Samples). The workflow is enabled by prepopulating nanowells with DMSO, which serves as a sacrificial capture liquid for microdissected tissues. The DMSO droplets efficiently collect laser-pressure catapulted LCM tissues as small as 20 µm in diameter with success rates >87%. We also demonstrate that tissue treatment with DMSO can significantly improve proteome coverage, likely due to its ability to dissolve lipids from tissue and enhance protein extraction efficiency. The LCM-nanoPOTS platform was able to identify 180, 695, and 1827 protein groups on average from 12-µm-thick rat brain cortex tissue sections with diameters of 50, 100, and 200 µm, respectively. We also analyzed 100-µm-diameter sections corresponding to 10-18 cells from three different regions of rat brain and comparatively quantified ~1000 proteins, demonstrating the potential utility for high-resolution spatially resolved mapping of protein expression in tissues. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  18. Using lean principles to improve outpatient adult infusion clinic chemotherapy preparation turnaround times.

    PubMed

    Lamm, Matthew H; Eckel, Stephen; Daniels, Rowell; Amerine, Lindsey B

    2015-07-01

    The workflow and chemotherapy preparation turnaround times at an adult infusion clinic were evaluated to identify opportunities to optimize workflow and efficiency. A three-phase study using Lean Six Sigma methodology was conducted. In phase 1, chemotherapy turnaround times in the adult infusion clinic were examined one year after the interim goal of a 45-minute turnaround time was established. Phase 2 implemented various experiments including a five-day Kaizen event, using lean principles in an effort to decrease chemotherapy preparation turnaround times in a controlled setting. Phase 3 included the implementation of process-improvement strategies identified during the Kaizen event, coupled with a final refinement of operational processes. In phase 1, the mean turnaround time for all chemotherapy preparations decreased from 60 to 44 minutes, and a mean of 52 orders for adult outpatient chemotherapy infusions was received each day. After installing new processes, the mean turnaround time had improved to 37 minutes for each chemotherapy preparation in phase 2. In phase 3, the mean turnaround time decreased from 37 to 26 minutes. The overall mean turnaround time was reduced by 26 minutes, representing a 57% decrease in turnaround times in 19 months through the elimination of waste and the implementation of lean principles. This reduction was accomplished through increased efficiencies in the workplace, with no addition of human resources. Implementation of Lean Six Sigma principles improved workflow and efficiency at an adult infusion clinic and reduced the overall chemotherapy turnaround times from 60 to 26 minutes. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  19. Multiplex ligation-dependent probe amplification analysis on capillary electrophoresis instruments for a rapid gene copy number study.

    PubMed

    Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy

    2008-09-01

    Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies.
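
    Copy-number calling from MLPA peak data typically normalises each target probe against reference probes and compares the sample's ratio with that of normal controls, flagging large deviations as deletions or duplications. The sketch below implements that general idea with invented peak heights and conventional cut-offs of roughly 0.7 and 1.3; it is not the GeneMapper Report Manager algorithm.

```python
def normalised_ratios(sample, reference_probes):
    """Divide each target probe signal by the mean signal of the reference probes."""
    ref_mean = sum(sample[p] for p in reference_probes) / len(reference_probes)
    return {p: v / ref_mean for p, v in sample.items() if p not in reference_probes}

# Invented peak heights for one normal control and one test sample.
control = {"BRCA1_ex2": 1000, "BRCA1_ex13": 980, "ref1": 1020, "ref2": 990}
test    = {"BRCA1_ex2": 1010, "BRCA1_ex13": 490, "ref1": 1000, "ref2": 1005}
refs = ["ref1", "ref2"]

ctrl_ratios = normalised_ratios(control, refs)
test_ratios = normalised_ratios(test, refs)

for probe in ctrl_ratios:
    dq = test_ratios[probe] / ctrl_ratios[probe]      # dosage quotient vs. control
    call = "deletion" if dq < 0.7 else "duplication" if dq > 1.3 else "normal"
    print(f"{probe}: dosage quotient = {dq:.2f} -> {call}")
```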

  20. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    PubMed

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted to on-line solid-phase extraction (SPE). The only manual steps in the entire process were de-capping of the tubes and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine use (100 patient samples), the typical overall turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  1. Next-generation sequencing meets genetic diagnostics: development of a comprehensive workflow for the analysis of BRCA1 and BRCA2 genes

    PubMed Central

    Feliubadaló, Lídia; Lopez-Doriga, Adriana; Castellsagué, Ester; del Valle, Jesús; Menéndez, Mireia; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Gómez, Carolina; Campos, Olga; Pineda, Marta; González, Sara; Moreno, Victor; Brunet, Joan; Blanco, Ignacio; Serra, Eduard; Capellá, Gabriel; Lázaro, Conxi

    2013-01-01

    Next-generation sequencing (NGS) is changing genetic diagnosis due to its huge sequencing capacity and cost-effectiveness. The aim of this study was to develop an NGS-based workflow for routine diagnostics for hereditary breast and ovarian cancer syndrome (HBOCS), to improve genetic testing for BRCA1 and BRCA2. An NGS-based workflow was designed using BRCA MASTR kit amplicon libraries followed by GS Junior pyrosequencing. Data analysis combined the freely available Variant Identification Pipeline software with ad hoc R scripts, including a cascade of filters to generate coverage and variant calling reports. A BRCA homopolymer assay was performed in parallel. A research scheme was designed in two parts. A Training Set of 28 DNA samples containing 23 unique pathogenic mutations and 213 other variants (33 unique) was used. The workflow was validated in a set of 14 samples from HBOCS families in parallel with the current diagnostic workflow (Validation Set). The NGS-based workflow developed permitted the identification of all pathogenic mutations and genetic variants, including those located in or close to homopolymers. The use of NGS for detecting copy-number alterations was also investigated. The workflow meets the sensitivity and specificity requirements for the genetic diagnosis of HBOCS and improves on the cost-effectiveness of current approaches. PMID:23249957
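
    The cascade of filters amounts to requiring a minimum read depth over each position and minimum allele-frequency and quality values for candidate variants before they are reported. The sketch below applies such a cascade to invented variant calls; the thresholds and field names are illustrative and are not the published pipeline's settings.

```python
MIN_COVERAGE = 50        # minimum reads over the position (illustrative)
MIN_VAR_FREQ = 0.20      # minimum variant allele frequency (illustrative)
MIN_QUALITY  = 30        # minimum call quality (illustrative)

variants = [  # invented calls
    {"pos": "BRCA1:c.68_69delAG", "coverage": 420, "freq": 0.48, "qual": 60},
    {"pos": "BRCA2:c.9976A>T",    "coverage": 35,  "freq": 0.51, "qual": 55},
    {"pos": "BRCA1:c.4308T>C",    "coverage": 300, "freq": 0.08, "qual": 40},
]

def passes_filters(v):
    """Cascade: read depth first, then allele frequency, then call quality."""
    return (v["coverage"] >= MIN_COVERAGE
            and v["freq"] >= MIN_VAR_FREQ
            and v["qual"] >= MIN_QUALITY)

reportable = [v for v in variants if passes_filters(v)]
low_coverage = [v for v in variants if v["coverage"] < MIN_COVERAGE]  # would trigger follow-up
print("reportable:", [v["pos"] for v in reportable])
print("insufficient coverage:", [v["pos"] for v in low_coverage])
```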

  2. Development of a Multiplexed Liquid Chromatography Multiple-Reaction-Monitoring Mass Spectrometry (LC-MRM/MS) Method for Evaluation of Salivary Proteins as Oral Cancer Biomarkers.

    PubMed

    Chen, Yi-Ting; Chen, Hsiao-Wei; Wu, Chun-Feng; Chu, Lichieh Julie; Chiang, Wei-Fang; Wu, Chih-Ching; Yu, Jau-Song; Tsai, Cheng-Han; Liang, Kung-Hao; Chang, Yu-Sun; Wu, Maureen; Ou Yang, Wei-Ting

    2017-05-01

    Multiple (selected) reaction monitoring (MRM/SRM) of peptides is a growing technology for target protein quantification because it is more robust, precise, accurate, high-throughput, and multiplex-capable than antibody-based techniques. The technique has been applied clinically to the large-scale quantification of multiple target proteins in different types of fluids. However, previous MRM-based studies have placed less focus on sample-preparation workflow and analytical performance in the precise quantification of proteins in saliva, a noninvasively sampled body fluid. In this study, we evaluated the analytical performance of a simple and robust multiple reaction monitoring (MRM)-based targeted proteomics approach incorporating liquid chromatography with mass spectrometry detection (LC-MRM/MS). This platform was used to quantitatively assess the biomarker potential of a group of 56 salivary proteins that have previously been associated with human cancers. To further enhance the development of this technology for assay of salivary samples, we optimized the workflow for salivary protein digestion and evaluated quantification performance, robustness and technical limitations in analyzing clinical samples. Using a clinically well-characterized cohort of two independent clinical sample sets (total n = 119), we quantitatively characterized these protein biomarker candidates in saliva specimens from controls and oral squamous cell carcinoma (OSCC) patients. The results clearly showed a significant elevation of most targeted proteins in saliva samples from OSCC patients compared with controls. Overall, this platform was capable of assaying the most highly multiplexed panel of salivary protein biomarkers, highlighting the clinical utility of MRM in oral cancer biomarker research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  3. A workflow to preserve genome-quality tissue samples from plants in botanical gardens and arboreta1

    PubMed Central

    Gostel, Morgan R.; Kelloff, Carol; Wallick, Kyle; Funk, Vicki A.

    2016-01-01

    Premise of the study: Internationally, gardens hold diverse living collections that can be preserved for genomic research. Workflows have been developed for genomic tissue sampling in other taxa (e.g., vertebrates), but are inadequate for plants. We outline a workflow for tissue sampling intended for two audiences: botanists interested in genomics research and garden staff who plan to voucher living collections. Methods and Results: Standard herbarium methods are used to collect vouchers, label information and images are entered into a publicly accessible database, and leaf tissue is preserved in silica and liquid nitrogen. A five-step approach for genomic tissue sampling is presented for sampling from living collections according to current best practices. Conclusions: Collecting genome-quality samples from gardens is an economical and rapid way to make tissue from the diversity of plants on Earth available for scientific research. The Global Genome Initiative will facilitate and lead this endeavor through international partnerships. PMID:27672517

  4. The road map towards providing a robust Raman spectroscopy-based cancer diagnostic platform and integration into clinic

    NASA Astrophysics Data System (ADS)

    Lau, Katherine; Isabelle, Martin; Lloyd, Gavin R.; Old, Oliver; Shepherd, Neil; Bell, Ian M.; Dorney, Jennifer; Lewis, Aaran; Gaifulina, Riana; Rodriguez-Justo, Manuel; Kendall, Catherine; Stone, Nicolas; Thomas, Geraint; Reece, David

    2016-03-01

    Despite its demonstrated potential as an accurate cancer diagnostic tool, Raman spectroscopy (RS) is yet to be adopted by the clinic for histopathology reviews. The Stratified Medicine through Advanced Raman Technologies (SMART) consortium has begun to address some of the hurdles in its adoption for cancer diagnosis. These hurdles include awareness and acceptance of the technology, practicality of integration into the histopathology workflow, data reproducibility and availability of transferable models. The consortium is jointly developing optimised protocols for tissue sample preparation, data collection and analysis. These protocols will be supported by provision of suitable hardware and software tools to allow statistically sound classification models to be built and transferred for use on different systems. In addition, we are building a validated gastrointestinal (GI) cancers model, which can be trialled as part of the histopathology workflow at hospitals, and a classification tool. At the end of the project, we aim to deliver a robust Raman-based diagnostic platform to enable clinical researchers to stage cancer, define tumour margins, build cancer diagnostic models and discover novel disease biomarkers.
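
    One transparent way to build and transfer a spectral classification model is to store class mean spectra (centroids) and assign each new spectrum to the nearest centroid. The sketch below does exactly that on tiny invented 'spectra'; it is a generic illustration of a transferable classification model, not the SMART consortium's method.

```python
import math

def centroid(spectra):
    """Element-wise mean of a list of equal-length spectra."""
    n = len(spectra)
    return [sum(vals) / n for vals in zip(*spectra)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Invented, heavily simplified 'spectra' (intensity at a few wavenumbers).
normal_train = [[1.0, 0.2, 0.1], [0.9, 0.25, 0.12]]
tumour_train = [[0.4, 0.8, 0.5], [0.45, 0.75, 0.55]]
model = {"normal": centroid(normal_train), "tumour": centroid(tumour_train)}  # transferable model

def classify(spectrum, model):
    """Assign the spectrum to the class with the nearest centroid."""
    return min(model, key=lambda label: distance(spectrum, model[label]))

print(classify([0.42, 0.79, 0.52], model))   # -> 'tumour' for this invented example
```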

  5. A data-independent acquisition workflow for qualitative screening of new psychoactive substances in biological samples.

    PubMed

    Kinyua, Juliet; Negreira, Noelia; Ibáñez, María; Bijlsma, Lubertus; Hernández, Félix; Covaci, Adrian; van Nuijs, Alexander L N

    2015-11-01

    Identification of new psychoactive substances (NPS) is challenging. Developing targeted methods for their analysis can be difficult and costly due to their impermanence on the drug scene. Accurate-mass mass spectrometry (AMMS) using a quadrupole time-of-flight (QTOF) analyzer can be useful for wide-scope screening since it provides sensitive, full-spectrum MS data. Our article presents a qualitative screening workflow based on data-independent acquisition mode (all-ions MS/MS) on liquid chromatography (LC) coupled to QTOFMS for the detection and identification of NPS in biological matrices. The workflow combines fundamentals of target and suspect screening data-processing techniques into a structured algorithm. This allows the detection and tentative identification of NPS and their metabolites. We have applied the workflow to two actual case studies involving drug intoxications where we detected and confirmed the parent compounds ketamine, 25B-NBOMe, 25C-NBOMe, and several predicted phase I and II metabolites not previously reported in urine and serum samples. The screening workflow demonstrates added value for the detection and identification of NPS in biological matrices.
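
    At its core, suspect screening matches measured accurate masses against the expected masses of suspects within a parts-per-million tolerance. The sketch below shows that matching step with invented masses; it illustrates only one element of the full all-ions MS/MS workflow, and the values are not from the study.

```python
PPM_TOL = 5.0   # illustrative mass tolerance in parts per million

# Invented suspect list: name -> expected [M+H]+ m/z
suspects = {
    "suspect_A": 238.1438,
    "suspect_B": 336.1594,
    "suspect_C": 316.1543,
}

# Invented measured features (m/z values detected in the sample)
features = [238.1442, 310.2005, 336.1580]

def ppm_error(measured, expected):
    """Signed mass error of a measured m/z relative to the expected value, in ppm."""
    return (measured - expected) / expected * 1e6

for mz in features:
    for name, expected in suspects.items():
        err = ppm_error(mz, expected)
        if abs(err) <= PPM_TOL:
            print(f"m/z {mz:.4f} matches {name} ({err:+.1f} ppm)")
```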

  6. Multiplex Ligation-Dependent Probe Amplification Analysis on Capillary Electrophoresis Instruments for a Rapid Gene Copy Number Study

    PubMed Central

    Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy

    2008-01-01

    Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies. PMID:19137113

  7. ToxCast Data Generation: Chemical Workflow

    EPA Pesticide Factsheets

    This page describes the process EPA follows to select, procure, register, quality-review, and prepare chemicals for high-throughput screening.

  8. Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.

    PubMed

    Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras

    2016-04-01

    There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection, when hundreds of samples must be analyzed in a short period of time to confirm their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a <4 h magnetic bead-based process with excellent yield and good repeatability. This article demonstrates the next level of this work by automating all steps of the optimized magnetic bead-based protocol, from endoglycosidase digestion through fluorophore labeling and cleanup, with high-throughput sample processing in 96-well plate format on an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid (<3 min) separation to accommodate the high-throughput processing of the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences. © 2015 Society for Laboratory Automation and Screening.

  9. A proteomics performance standard to support measurement quality in proteomics.

    PubMed

    Beasley-Green, Ashley; Bunk, David; Rudnick, Paul; Kilpatrick, Lisa; Phinney, Karen

    2012-04-01

    The emergence of MS-based proteomic platforms as a prominent technology utilized in biochemical and biomedical research has increased the need for high-quality MS measurements. To address this need, National Institute of Standards and Technology (NIST) reference material (RM) 8323 yeast protein extract is introduced as a proteomics quality control material for benchmarking the preanalytical and analytical performance of proteomics-based experimental workflows. RM 8323 yeast protein extract is based upon the well-characterized eukaryote Saccharomyces cerevisiae and can be utilized in the design and optimization of proteomics-based methodologies from sample preparation to data analysis. To demonstrate its utility as a proteomics quality control material, we coupled LC-MS/MS measurements of RM 8323 with the NIST MS Quality Control (MSQC) performance metrics to quantitatively assess the LC-MS/MS instrumentation parameters that influence measurement accuracy, repeatability, and reproducibility. Due to the complexity of the yeast proteome, we also demonstrate how NIST RM 8323, along with the NIST MSQC performance metrics, can be used in the evaluation and optimization of proteomics-based sample preparation methods. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. A framework for streamlining research workflow in neuroscience and psychology

    PubMed Central

    Kubilius, Jonas

    2014-01-01

    Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to rely on ad hoc workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology communities, and introduce psychopy_ext, a unifying framework that seamlessly integrates popular experiment-building, analysis, and manuscript-preparation tools by choosing reasonable defaults and implementing relatively rigid workflow patterns. This structure allows for the automation of multiple tasks, such as generating user interfaces, unit testing, running control analyses of stimuli, providing single-command access to descriptive statistics, and producing publication-quality plots. Taken together, psychopy_ext opens an exciting possibility for faster, more robust code development and collaboration among researchers. PMID:24478691

  11. Proteomics Analysis of Skeletal Muscle from Leptin-Deficient ob/ob Mice Reveals Adaptive Remodeling of Metabolic Characteristics and Fiber Type Composition.

    PubMed

    Schönke, Milena; Björnholm, Marie; Chibalin, Alexander V; Zierath, Juleen R; Deshmukh, Atul S

    2018-03-01

    Skeletal muscle insulin resistance, an early metabolic defect in the pathogenesis of type 2 diabetes (T2D), may be a cause or consequence of altered protein expression profiles. Proteomics technology offers enormous promise to investigate molecular mechanisms underlying pathologies; however, the analysis of skeletal muscle is challenging. Using state-of-the-art multienzyme digestion and filter-aided sample preparation (MED-FASP) and a mass spectrometry (MS)-based workflow, we performed a global proteomics analysis of skeletal muscle from leptin-deficient, obese, insulin-resistant (ob/ob) and lean mice in just two fractions in a short time (8 h per sample). We identified more than 6000 proteins, with 118 proteins differentially regulated in obesity. These included protein kinases, phosphatases, and secreted and fiber-type-associated proteins. Enzymes involved in lipid metabolism in skeletal muscle from ob/ob mice were increased, providing evidence against reduced fatty acid oxidation in lipid-induced insulin resistance. Mitochondrial and peroxisomal proteins, as well as components of pyruvate and lactate metabolism, were increased. Finally, the skeletal muscle proteome from ob/ob mice displayed a shift toward the "slow fiber type." This detailed characterization of an obese rodent model of T2D demonstrates an efficient workflow for skeletal muscle proteomics, which may easily be adapted to other complex tissues. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Destination bedside: using research findings to visualize optimal unit layouts and health information technology in support of bedside care.

    PubMed

    Watkins, Nicholas; Kennedy, Mary; Lee, Nelson; O'Neill, Michael; Peavey, Erin; Ducharme, Maria; Padula, Cynthia

    2012-05-01

    This study explored the impact of unit design and healthcare information technology (HIT) on nursing workflow and patient-centered care (PCC). Healthcare information technology and unit layout-related predictors of nursing workflow and PCC were measured during a 3-phase study involving questionnaires and work sampling methods. Stepwise multiple linear regressions demonstrated several HIT and unit layout-related factors that impact nursing workflow and PCC.

  13. Super-resolution for everybody: An image processing workflow to obtain high-resolution images with a standard confocal microscope.

    PubMed

    Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne

    2017-02-15

    In the presented work, we aimed to improve confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image-processing workflow that improves the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters, and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising, and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
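
    As a pointer to how the deconvolution step of such a workflow can look in practice, here is a minimal Python sketch using scikit-image's Richardson-Lucy implementation; the synthetic image, Gaussian PSF, and iteration count are placeholder assumptions, not the parameters optimized in the study:

        # Minimal deconvolution sketch using scikit-image's Richardson-Lucy implementation.
        # The Gaussian PSF and iteration count are placeholder assumptions, not the
        # parameters established in the paper's optimized workflow.
        import numpy as np
        from skimage import restoration

        def gaussian_psf(size=9, sigma=1.6):
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
            return psf / psf.sum()

        rng = np.random.default_rng(0)
        image = rng.random((128, 128))          # stand-in for a confocal slice
        psf = gaussian_psf()
        deconvolved = restoration.richardson_lucy(image, psf, 30)  # 30 iterations
        print(deconvolved.shape)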

  14. Comparative Evaluation of Small Molecular Additives and Their Effects on Peptide/Protein Identification.

    PubMed

    Gao, Jing; Zhong, Shaoyun; Zhou, Yanting; He, Han; Peng, Shuying; Zhu, Zhenyun; Liu, Xing; Zheng, Jing; Xu, Bin; Zhou, Hu

    2017-06-06

    Detergents and salts are widely used in lysis buffers to enhance protein extraction from biological samples, facilitating in-depth proteomic analysis. However, these detergents and salt additives must be efficiently removed from the digested samples prior to LC-MS/MS analysis to obtain high-quality mass spectra. Although filter-aided sample preparation (FASP), acetone precipitation (AP) followed by in-solution digestion, and strong cation exchange-based centrifugal proteomic reactors (CPRs) are commonly used for proteomic sample processing, little is known about their efficiencies at removing detergents and salt additives. In this study, we (i) developed an integrative workflow for the quantification of small-molecule additives in proteomic samples, based on a multiple reaction monitoring (MRM) LC-MS approach covering six additives (i.e., Tris, urea, CHAPS, SDS, SDC, and Triton X-100), and (ii) systematically evaluated the relationships between the level of additive remaining in samples following sample processing and the number of peptides/proteins identified by mass spectrometry. Although FASP outperformed the other two methods, the results were complementary in terms of peptide/protein identification, as well as the GRAVY index and amino acid distributions. This is the first systematic and quantitative study of the effect of detergents and salt additives on protein identification. This MRM-based approach can be used for an unbiased evaluation of the performance of new sample preparation methods. Data are available via ProteomeXchange under identifier PXD005405.
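
    As a simple illustration of quantifying a residual additive by external calibration, the sketch below fits a line to MRM peak areas and back-calculates an unknown; the calibration points and the unknown's peak area are invented, not data from the study:

        # Sketch of external-calibration quantification of a residual additive (e.g., SDS)
        # from MRM peak areas. Calibration points and the unknown's peak area are invented
        # for illustration; they are not data from the study.
        import numpy as np

        cal_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])               # additive concentration (µg/mL)
        cal_area = np.array([20.0, 540.0, 1050.0, 2110.0, 5230.0])   # MRM peak areas

        slope, intercept = np.polyfit(cal_conc, cal_area, 1)  # linear calibration fit

        def quantify(peak_area):
            """Back-calculate additive concentration from an MRM peak area."""
            return (peak_area - intercept) / slope

        unknown_area = 880.0   # peak area measured in a processed digest (hypothetical)
        print(f"residual additive = {quantify(unknown_area):.2f} µg/mL")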

  15. Electrochemical pesticide detection with AutoDip--a portable platform for automation of crude sample analyses.

    PubMed

    Drechsel, Lisa; Schulz, Martin; von Stetten, Felix; Moldovan, Carmen; Zengerle, Roland; Paust, Nils

    2015-02-07

    Lab-on-a-chip devices hold promise for automation of complex workflows from sample to answer with minimal consumption of reagents in portable devices. However, complex, inhomogeneous samples as they occur in environmental or food analysis may block microchannels and thus often cause malfunction of the system. Here we present the novel AutoDip platform which is based on the movement of a solid phase through the reagents and sample instead of transporting a sequence of reagents through a fixed solid phase. A ball-pen mechanism operated by an external actuator automates unit operations such as incubation and washing by consecutively dipping the solid phase into the corresponding liquids. The platform is applied to electrochemical detection of organophosphorus pesticides in real food samples using an acetylcholinesterase (AChE) biosensor. Minimal sample preparation and an integrated reagent pre-storage module hold promise for easy handling of the assay. Detection of the pesticide chlorpyrifos-oxon (CPO) spiked into apple samples at concentrations of 10⁻⁷ M has been demonstrated. This concentration is below the maximum residue level for chlorpyrifos in apples defined by the European Commission.

  16. Acoustic Sample Deposition MALDI-MS (ASD-MALDI-MS): A Novel Process Flow for Quality Control Screening of Compound Libraries.

    PubMed

    Chin, Jefferson; Wood, Elizabeth; Peters, Grace S; Drexler, Dieter M

    2016-02-01

    In the early stages of drug discovery, high-throughput screening (HTS) of compound libraries against pharmaceutical targets is a common method to identify potential lead molecules. For these HTS campaigns to be efficient and successful, continuous quality control of the compound collection is necessary and crucial. However, the large number of compound samples and the limited sample amount pose unique challenges. Presented here is a proof-of-concept study for a novel process flow for the quality control screening of small-molecule compound libraries that consumes only minimal amounts of samples and affords compound-specific molecular data. This process employs an acoustic sample deposition (ASD) technique for the offline sample preparation by depositing nanoliter volumes in an array format onto microscope glass slides followed by matrix-assisted laser desorption/ionization mass spectrometric (MALDI-MS) analysis. An initial study of a 384-compound array employing the ASD-MALDI-MS workflow resulted in a 75% first-pass positive identification rate with an analysis time of <1 s per sample. © 2015 Society for Laboratory Automation and Screening.
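
    A minimal Python sketch of the first-pass check such a QC screen performs, looking for the expected [M+H]+ of each registered compound in its MALDI spectrum; the masses and the 0.05 Da tolerance are assumptions for illustration:

        # Sketch of a first-pass library QC check: a well passes if the expected [M+H]+
        # of the registered compound is found in its MALDI spectrum within a tolerance.
        # The 0.05 Da window and the example masses are assumptions for illustration.
        PROTON = 1.00728  # Da

        def first_pass(expected_monoisotopic_mass, observed_mz, tol=0.05):
            target = expected_monoisotopic_mass + PROTON
            return any(abs(mz - target) <= tol for mz in observed_mz)

        # Hypothetical plate of three wells: (expected neutral mass, observed peak list)
        plate = [
            (314.1362, [315.142, 337.125, 629.281]),   # found as [M+H]+
            (452.2104, [226.612, 453.220, 475.203]),
            (289.0950, [312.110, 601.190]),            # not found, fails first pass
        ]
        passed = sum(first_pass(mass, peaks) for mass, peaks in plate)
        print(f"first-pass positive identification rate: {passed / len(plate):.0%}")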

  17. Proteomic Workflows for Biomarker Identification Using Mass Spectrometry — Technical and Statistical Considerations during Initial Discovery

    PubMed Central

    Orton, Dennis J.; Doucette, Alan A.

    2013-01-01

    Identification of biomarkers capable of differentiating between pathophysiological states of an individual is a laudable goal in the field of proteomics. Protein biomarker discovery generally employs high throughput sample characterization by mass spectrometry (MS), being capable of identifying and quantifying thousands of proteins per sample. While MS-based technologies have rapidly matured, the identification of truly informative biomarkers remains elusive, with only a handful of clinically applicable tests stemming from proteomic workflows. This underlying lack of progress is attributed in large part to erroneous experimental design, biased sample handling, as well as improper statistical analysis of the resulting data. This review will discuss in detail the importance of experimental design and provide some insight into the overall workflow required for biomarker identification experiments. Proper balance between the degree of biological vs. technical replication is required for confident biomarker identification. PMID:28250400

  18. Comparison of sample preparation techniques and data analysis for the LC-MS/MS-based identification of proteins in human follicular fluid.

    PubMed

    Lehmann, Roland; Schmidt, André; Pastuschek, Jana; Müller, Mario M; Fritzsche, Andreas; Dieterle, Stefan; Greb, Robert R; Markert, Udo R; Slevogt, Hortense

    2018-06-25

    The proteomic analysis of complex body fluids by liquid chromatography-tandem mass spectrometry (LC-MS/MS) requires the selection of suitable sample preparation techniques and optimal parameter settings in data analysis software packages to obtain reliable results. Proteomic analysis of follicular fluid, as a representative of a complex body fluid similar to serum or plasma, is difficult as it contains a vast amount of highly abundant proteins and a variety of proteins with different concentrations. However, the accessibility of this complex body fluid for LC-MS/MS analysis is an opportunity to gain insights into the status and composition of fertility-relevant proteins, including immunological factors, or to discover new diagnostic and prognostic markers, for example for the treatment of infertility. In this study, we compared different sample preparation methods (FASP, eFASP and in-solution digestion) and three different data analysis software packages (Proteome Discoverer with SEQUEST, Mascot and MaxQuant with Andromeda) combined with semi- and full-tryptic database search options to obtain a maximum coverage of the follicular fluid proteome. We found that the most comprehensive proteome coverage is achieved by the eFASP sample preparation method using SDS in the initial denaturing step and the SEQUEST-based semi-tryptic data analysis. In conclusion, we have developed a fractionation-free methodical workflow for in-depth LC-MS/MS-based analysis for the standardized investigation of human follicular fluid as an important representative of a complex body fluid. Taken together, we were able to identify a total of 1392 proteins in follicular fluid. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Microfluidic-Mass Spectrometry Interfaces for Translational Proteomics.

    PubMed

    Pedde, R Daniel; Li, Huiyan; Borchers, Christoph H; Akbari, Mohsen

    2017-10-01

    Interfacing mass spectrometry (MS) with microfluidic chips (μchip-MS) holds considerable potential to transform a clinician's toolbox, providing translatable methods for the early detection, diagnosis, monitoring, and treatment of noncommunicable diseases by streamlining and integrating laborious sample preparation workflows on high-throughput, user-friendly platforms. Overcoming the limitations of competitive immunoassays - currently the gold standard in clinical proteomics - μchip-MS can provide unprecedented access to complex proteomic assays having high sensitivity and specificity, but without the labor, costs, and complexities associated with conventional MS sample processing. This review surveys recent μchip-MS systems for clinical applications and examines their emerging role in streamlining the development and translation of MS-based proteomic assays by alleviating many of the challenges that currently inhibit widespread clinical adoption. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  20. A practical workflow for making anatomical atlases for biological research.

    PubMed

    Wan, Yong; Lewis, A Kelsey; Colasanto, Mary; van Langeveld, Mark; Kardon, Gabrielle; Hansen, Charles

    2012-01-01

    The anatomical atlas has been at the intersection of science and art for centuries. These atlases are essential to biological research, but high-quality atlases are often scarce. Recent advances in imaging technology have made high-quality 3D atlases possible. However, until now there has been a lack of practical workflows using standard tools to generate atlases from images of biological samples. With certain adaptations, CG artists' workflow and tools, traditionally used in the film industry, are practical for building high-quality biological atlases. Researchers have developed a workflow for generating a 3D anatomical atlas using accessible artists' tools. They used this workflow to build a mouse limb atlas for studying the musculoskeletal system's development. This research aims to raise the awareness of using artists' tools in scientific research and promote interdisciplinary collaborations between artists and scientists. This video (http://youtu.be/g61C-nia9ms) demonstrates a workflow for creating an anatomical atlas.

  1. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    PubMed

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints-touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  2. Validation of the Applied Biosystems RapidFinder Shiga Toxin-Producing E. coli (STEC) Detection Workflow.

    PubMed

    Cloke, Jonathan; Matheny, Sharon; Swimley, Michelle; Tebbs, Robert; Burrell, Angelia; Flannery, Jonathan; Bastin, Benjamin; Bird, Patrick; Benzinger, M Joseph; Crowley, Erin; Agin, James; Goins, David; Salfinger, Yvonne; Brodsky, Michael; Fernandez, Maria Cristina

    2016-11-01

    The Applied Biosystems™ RapidFinder™ STEC Detection Workflow (Thermo Fisher Scientific) is a complete protocol for the rapid qualitative detection of Escherichia coli (E. coli) O157:H7 and the "Big 6" non-O157 Shiga-like toxin-producing E. coli (STEC) serotypes (defined as serogroups: O26, O45, O103, O111, O121, and O145). The RapidFinder STEC Detection Workflow makes use of either the automated preparation of PCR-ready DNA using the Applied Biosystems PrepSEQ™ Nucleic Acid Extraction Kit in conjunction with the Applied Biosystems MagMAX™ Express 96-well magnetic particle processor or the Applied Biosystems PrepSEQ Rapid Spin kit for manual preparation of PCR-ready DNA. Two separate assays comprise the RapidFinder STEC Detection Workflow, the Applied Biosystems RapidFinder STEC Screening Assay and the Applied Biosystems RapidFinder STEC Confirmation Assay. The RapidFinder STEC Screening Assay includes primers and probes to detect the presence of stx1 (Shiga toxin 1), stx2 (Shiga toxin 2), eae (intimin), and E. coli O157 gene targets. The RapidFinder STEC Confirmation Assay includes primers and probes for the "Big 6" non-O157 STEC and E. coli O157:H7. The use of these two assays in tandem allows a user to detect accurately the presence of the "Big 6" STECs and E. coli O157:H7. The performance of the RapidFinder STEC Detection Workflow was evaluated in a method comparison study, in inclusivity and exclusivity studies, and in a robustness evaluation. The assays were compared to the U.S. Department of Agriculture (USDA), Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook (MLG) 5.09: Detection, Isolation and Identification of Escherichia coli O157:H7 from Meat Products and Carcass and Environmental Sponges for raw ground beef (73% lean) and USDA/FSIS-MLG 5B.05: Detection, Isolation and Identification of Escherichia coli non-O157:H7 from Meat Products and Carcass and Environmental Sponges for raw beef trim. No statistically significant differences were observed between the reference method and the individual or combined kits forming the candidate assay using either of the DNA preparation kits (manual or automated extraction). For the inclusivity and exclusivity evaluation, the RapidFinder STEC Detection Workflow, comprising both RapidFinder STEC screening and confirmation kits, correctly identified all 50 target organism isolates and correctly excluded all 30 nontarget strains for both of the assays evaluated. The results of these studies demonstrate the sensitivity and selectivity of the RapidFinder STEC Detection Workflow for the detection of E. coli O157:H7 and the "Big 6" STEC serotypes in both raw ground beef and beef trim. The robustness testing demonstrated that minor variations in the method parameters did not impact the accuracy of the assay and highlighted the importance of following the correct incubation temperatures.
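
    A simplified sketch of how screening and confirmation calls can be combined in tandem; this only illustrates the two-step logic described above and is not the interpretation algorithm of the RapidFinder kits:

        # Simplified sketch of combining screening (stx1/stx2, eae, O157) and confirmation
        # (serogroup-specific) qPCR calls into a tandem decision. This illustrates the
        # two-step logic only; the RapidFinder kits' actual interpretation rules may differ.
        def screen_positive(calls):
            """Presumptive STEC: a Shiga-toxin gene plus the intimin gene detected."""
            return (calls.get("stx1") or calls.get("stx2")) and calls.get("eae")

        def confirm_serotype(calls):
            """Return the serogroups detected by the confirmation assay."""
            big6 = ("O26", "O45", "O103", "O111", "O121", "O145")
            hits = [s for s in big6 if calls.get(s)]
            if calls.get("O157:H7"):
                hits.append("O157:H7")
            return hits

        sample = {"stx2": True, "eae": True, "O157": False, "O121": True}
        if screen_positive(sample):
            print("confirmed serogroups:", confirm_serotype(sample) or "none (screen-only positive)")
        else:
            print("negative after screening")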

  3. A Web-Hosted R Workflow to Simplify and Automate the Analysis of 16S NGS Data

    EPA Science Inventory

    Next-Generation Sequencing (NGS) produces large data sets that include tens of thousands of sequence reads per sample. For analysis of bacterial diversity, 16S NGS sequences are typically analyzed in a workflow containing best-of-breed bioinformatics packages that may levera...

  4. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. Chief information officers (CIOs) of 199 hospitals took part in the benchmarking; these hospitals were assigned to reference groups of similar size and ownership drawn from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  5. Preparation of Low-Input and Ligation-Free ChIP-seq Libraries Using Template-Switching Technology.

    PubMed

    Bolduc, Nathalie; Lehman, Alisa P; Farmer, Andrew

    2016-10-10

    Chromatin immunoprecipitation (ChIP) followed by high-throughput sequencing (ChIP-seq) has become the gold standard for mapping of transcription factors and histone modifications throughout the genome. However, for ChIP experiments involving few cells or targeting low-abundance transcription factors, the small amount of DNA recovered makes ligation of adapters very challenging. In this unit, we describe a ChIP-seq workflow that can be applied to small cell numbers, including a robust single-tube and ligation-free method for preparation of sequencing libraries from sub-nanogram amounts of ChIP DNA. An example ChIP protocol is first presented, resulting in selective enrichment of DNA-binding proteins and cross-linked DNA fragments immobilized on beads via an antibody bridge. This is followed by a protocol for fast and easy cross-linking reversal and DNA recovery. Finally, we describe a fast, ligation-free library preparation protocol, featuring DNA SMART technology, resulting in samples ready for Illumina sequencing. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.

  6. Preparation of metagenomic libraries from naturally occurring marine viruses.

    PubMed

    Solonenko, Sergei A; Sullivan, Matthew B

    2013-01-01

    Microbes are now well recognized as major drivers of the biogeochemical cycling that fuels the Earth, and their viruses (phages) are known to be abundant and important in microbial mortality, horizontal gene transfer, and modulating microbial metabolic output. Investigation of environmental phages has been frustrated by an inability to culture the vast majority of naturally occurring diversity coupled with the lack of robust, quantitative, culture-independent methods for studying this uncultured majority. However, for double-stranded DNA phages, a quantitative viral metagenomic sample-to-sequence workflow now exists. Here, we review these advances with special emphasis on the technical details of preparing DNA sequencing libraries for metagenomic sequencing from environmentally relevant low-input DNA samples. Library preparation steps broadly involve manipulating the sample DNA by fragmentation, end repair and adaptor ligation, size fractionation, and amplification. One critical area of future research and development is parallel advances for alternate nucleic acid types such as single-stranded DNA and RNA viruses that are also abundant in nature. Combinations of recent advances in fragmentation (e.g., acoustic shearing and tagmentation), ligation reactions (adaptor-to-template ratio reference table availability), size fractionation (non-gel-sizing), and amplification (linear amplification for deep sequencing and linker amplification protocols) enhance our ability to generate quantitatively representative metagenomic datasets from low-input DNA samples. Such datasets are already providing new insights into the role of viruses in marine systems and will continue to do so as new environments are explored and synergies and paradigms emerge from large-scale comparative analyses. © 2013 Elsevier Inc. All rights reserved.

  7. Trypsin and MALDI matrix pre-coated targets simplify sample preparation for mapping proteomic distributions within biological tissues by imaging mass spectrometry

    PubMed Central

    Zubair, Faizan; Laibinis, Paul E.; Swisher, William G.; Yang, Junhai; Spraggins, Jeffrey M.; Norris, Jeremy L.; Caprioli, Richard M.

    2017-01-01

    Prefabricated surfaces containing α-cyano-4-hydroxycinnamic acid and trypsin have been developed to facilitate enzymatic digestion of endogenous tissue proteins prior to matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS). Tissue sections are placed onto slides that were previously coated with α-cyano-4-hydroxycinnamic acid and trypsin. After incubation to promote enzymatic digestion, the tissue is analyzed by MALDI IMS to determine the spatial distribution of the tryptic fragments. The peptides detected in the MALDI IMS dataset were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Protein identification was further confirmed by correlating the localization of unique tryptic fragments originating from common parent proteins. Using this procedure, proteins with molecular weights as large as 300 kDa were identified and their distributions were imaged in sections of rat brain. In particular, large proteins such as myristoylated alanine-rich C-kinase substrate (29.8 kDa) and spectrin alpha chain, non-erythrocytic 1 (284 kDa) were detected, which are not observed without trypsin. The pre-coated targets simplify workflow and increase sample throughput by decreasing the sample preparation time. Further, the approach allows imaging at higher spatial resolution compared with robotic spotters that apply one drop at a time. PMID:27676701
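
    A minimal sketch of the correlation step mentioned above, comparing the spatial distributions (ion images) of two tryptic fragments to judge whether they are consistent with a common parent protein; the images and the 0.7 cut-off are synthetic assumptions:

        # Sketch of correlating MALDI IMS ion images of two tryptic peptides to check
        # whether their spatial distributions are consistent with a common parent protein.
        # The synthetic images and the 0.7 correlation cut-off are assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        base = rng.random((64, 64))                     # shared underlying distribution
        image_pep1 = base + 0.1 * rng.random((64, 64))  # two fragments of the same protein
        image_pep2 = base + 0.1 * rng.random((64, 64))

        r = np.corrcoef(image_pep1.ravel(), image_pep2.ravel())[0, 1]
        print(f"Pearson r = {r:.2f} ->",
              "consistent with a common parent protein" if r > 0.7 else "poorly correlated")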

  8. Sample data processing in an additive and reproducible taxonomic workflow by using character data persistently linked to preserved individual specimens

    PubMed Central

    Kilian, Norbert; Henning, Tilo; Plitzner, Patrick; Müller, Andreas; Güntsch, Anton; Stöver, Ben C.; Müller, Kai F.; Berendsohn, Walter G.; Borsch, Thomas

    2015-01-01

    We present the model and implementation of a workflow that blazes a trail in systematic biology for the re-usability of character data (data on any kind of characters of pheno- and genotypes of organisms) and their additivity from specimen to taxon level. We take into account that any taxon characterization is based on a limited set of sampled individuals and characters, and that consequently any new individual and any new character may affect the recognition of biological entities and/or the subsequent delimitation and characterization of a taxon. Taxon concepts thus frequently change during the knowledge generation process in systematic biology. Structured character data are therefore not only needed for the knowledge generation process but also for easily adapting characterizations of taxa. We aim to facilitate the construction and reproducibility of taxon characterizations from structured character data of changing sample sets by establishing a stable and unambiguous association between each sampled individual and the data processed from it. Our workflow implementation uses the European Distributed Institute of Taxonomy Platform, a comprehensive taxonomic data management and publication environment to: (i) establish a reproducible connection between sampled individuals and all samples derived from them; (ii) stably link sample-based character data with the metadata of the respective samples; (iii) record and store structured specimen-based character data in formats allowing data exchange; (iv) reversibly assign sample metadata and character datasets to taxa in an editable classification and display them and (v) organize data exchange via standard exchange formats and enable the link between the character datasets and samples in research collections, ensuring high visibility and instant re-usability of the data. The workflow implemented will contribute to organizing the interface between phylogenetic analysis and revisionary taxonomic or monographic work. Database URL: http://campanula.e-taxonomy.net/ PMID:26424081
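
    A minimal sketch of the kind of data model described, with samples and character data kept persistently linked to the preserved individual they derive from; this uses plain Python dataclasses and is not the EDIT Platform implementation:

        # Minimal sketch of the data model described: specimens, derived samples, and
        # character data kept persistently linked to the originating individual.
        # This is only an illustration, not the EDIT Platform implementation.
        from dataclasses import dataclass, field

        @dataclass
        class Specimen:
            accession: str                     # preserved individual in a collection
            derived_samples: list = field(default_factory=list, repr=False)

        @dataclass
        class Sample:
            sample_id: str
            specimen: Specimen                 # stable link back to the individual
            character_data: dict = field(default_factory=dict)

            def __post_init__(self):
                self.specimen.derived_samples.append(self)

        holotype = Specimen("B 10 0456789")
        leaf_dna = Sample("DNA-0001", holotype, {"ITS length (bp)": 612})
        morphology = Sample("MORPH-0001", holotype, {"leaf margin": "serrate"})

        # Re-assembling a taxon characterization from all samples of a specimen stays trivial:
        for s in holotype.derived_samples:
            print(s.sample_id, "->", s.character_data)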

  9. Identification of drug metabolites in human plasma or serum integrating metabolite prediction, LC-HRMS and untargeted data processing.

    PubMed

    Jacobs, Peter L; Ridder, Lars; Ruijken, Marco; Rosing, Hilde; Jager, Nynke Gl; Beijnen, Jos H; Bas, Richard R; van Dongen, William D

    2013-09-01

    Comprehensive identification of human drug metabolites in first-in-man studies is crucial to avoid delays in later stages of drug development. We developed an efficient workflow for systematic identification of human metabolites in plasma or serum that combines metabolite prediction, high-resolution accurate-mass LC-MS, and MS-vendor-independent data processing. Retrospective evaluation of predictions for 14 ¹⁴C-ADME studies published between 2007 and January 2012 indicates that on average 90% of the major metabolites in human plasma can be identified by searching for accurate masses of predicted metabolites. Furthermore, the workflow can identify unexpected metabolites in the same processing run by differential analysis of samples of drug-dosed subjects and (placebo-dosed, pre-dose or otherwise blank) control samples. To demonstrate the utility of the workflow we applied it to identify tamoxifen metabolites in the serum of a breast cancer patient treated with tamoxifen. Previously published metabolites were confirmed in this study and additional metabolites were identified, two of which are discussed to illustrate the advantages of the workflow.
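
    A minimal sketch of the accurate-mass search step: observed high-resolution m/z values are matched against [M+H]+ masses of predicted metabolites within a ppm window; the metabolite list, peaks, and 5 ppm tolerance are illustrative assumptions:

        # Sketch of the accurate-mass search step: observed high-resolution m/z values are
        # matched against [M+H]+ masses of predicted metabolites within a ppm tolerance.
        # Predicted masses, observed peaks and the 5 ppm window are illustrative assumptions.
        PROTON = 1.00728  # Da

        predicted = {                      # hypothetical predicted metabolites (neutral masses)
            "parent":          371.2249,
            "hydroxylation":   387.2198,   # +O
            "demethylation":   357.2093,   # -CH2
            "glucuronidation": 547.2570,   # +C6H8O6
        }
        observed_mz = [372.2315, 388.2270, 548.2650]   # peaks from a dosed-subject sample

        def ppm(a, b):
            return abs(a - b) / b * 1e6

        for peak in observed_mz:
            for name, mass in predicted.items():
                if ppm(peak, mass + PROTON) <= 5:
                    print(f"{peak:.4f} matches {name} ({ppm(peak, mass + PROTON):.1f} ppm)")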

  10. The Importance of Experimental Design, Quality Assurance, and Control in Plant Metabolomics Experiments.

    PubMed

    Martins, Marina C M; Caldana, Camila; Wolf, Lucia Daniela; de Abreu, Luis Guilherme Furlan

    2018-01-01

    The output of metabolomics relies to a great extent upon the methods and instrumentation to identify, quantify, and access spatial information on as many metabolites as possible. However, the most modern machines and sophisticated tools for data analysis cannot compensate for inappropriate harvesting and/or sample preparation procedures that modify metabolic composition and can lead to erroneous interpretation of results. In addition, plant metabolism has a remarkable degree of complexity, and the number of identified compounds easily surpasses the number of samples in metabolomics analyses, increasing false discovery risk. These aspects pose a large challenge when carrying out plant metabolomics experiments. In this chapter, we address the importance of a proper experimental design taking into consideration preventable complications and unavoidable factors to achieve success in metabolomics analysis. We also focus on quality control and standardized procedures during the metabolomics workflow.

  11. Direct identification of bacteria from charcoal-containing blood culture bottles using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry.

    PubMed

    Wüppenhorst, N; Consoir, C; Lörch, D; Schneider, C

    2012-10-01

    Several protocols for direct matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS) from positive blood cultures are currently used to speed up the diagnostic process of bacteraemia. Identification rates are high and results are accurate for the BACTEC™ system and for charcoal-free bottles. Only a few studies have evaluated protocols for charcoal-containing BacT/ALERT bottles, reaching substantially lower identification rates. We established a new protocol for sample preparation from aerobic and anaerobic positive charcoal-containing BacT/ALERT blood culture bottles and measured the protein profiles (n = 167). Then, we integrated this protocol into the routine workflow of our laboratory (n = 212). During the establishment of our protocol, 74.3 % of bacteria were correctly identified to the species level, 23.4 % yielded no result, and 2.4 % yielded a false identification. Reliable criteria for correct species identification were a score value ≥1.400 and a best match on rank 1-3 of the same species. Identification rates during routine workflow were 77.8 % correct identification, 20.8 % not identified, and 1.4 % discordant identification. In conclusion, our results indicate that direct MALDI-TOF MS identification is possible, even from charcoal-containing blood cultures. Reliable criteria for correct species identification are a score value ≥1.400 and a best match on rank 1-3 of a single species.
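
    The acceptance rule reported above can be encoded directly; one reading of the criterion (top score ≥1.400 and ranks 1-3 naming the same species) is sketched below with an invented hit list:

        # One reading of the acceptance rule reported in the abstract: accept the species
        # call if the top score is >= 1.400 and ranks 1-3 all name the same species.
        # The example hit list is invented for illustration.
        def accept_identification(hits, min_score=1.400):
            """hits: list of (species, score) tuples sorted by descending score."""
            if len(hits) < 3 or hits[0][1] < min_score:
                return None
            top_species = hits[0][0]
            if all(species == top_species for species, _ in hits[:3]):
                return top_species
            return None

        hits = [("Escherichia coli", 1.87),
                ("Escherichia coli", 1.79),
                ("Escherichia coli", 1.62),
                ("Shigella sonnei", 1.41)]
        print(accept_identification(hits))   # -> Escherichia coli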

  12. Streamlining Workflow for Endovascular Mechanical Thrombectomy: Lessons Learned from a Comprehensive Stroke Center.

    PubMed

    Wang, Hongjin; Thevathasan, Arthur; Dowling, Richard; Bush, Steven; Mitchell, Peter; Yan, Bernard

    2017-08-01

    Recently, 5 randomized controlled trials confirmed the superiority of endovascular mechanical thrombectomy (EMT) to intravenous thrombolysis in acute ischemic stroke with large-vessel occlusion. The implication is that our health systems would witness an increasing number of patients treated with EMT. However, in-hospital delays, leading to increased time to reperfusion, are associated with poor clinical outcomes. This review outlines the in-hospital workflow of the treatment of acute ischemic stroke at a comprehensive stroke center and the lessons learned in reduction of in-hospital delays. The in-hospital workflow for acute ischemic stroke was described from prehospital notification to femoral arterial puncture in preparation for EMT. Systematic review of literature was also performed with PubMed. The implementation of workflow streamlining could result in reduction of in-hospital time delays for patients who were eligible for EMT. In particular, time-critical measures, including prehospital notification, the transfer of patients from door to computed tomography (CT) room, initiation of intravenous thrombolysis in the CT room, and the mobilization of neurointervention team in parallel with thrombolysis, all contributed to reduction in time delays. We have identified issues resulting in in-hospital time delays and have reported possible solutions to improve workflow efficiencies. We believe that these measures may help stroke centers initiate an EMT service for eligible patients. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  13. High throughput workflow for coacervate formation and characterization in shampoo systems.

    PubMed

    Kalantar, T H; Tucker, C J; Zalusky, A S; Boomgaard, T A; Wilson, B E; Ladika, M; Jordan, S L; Li, W K; Zhang, X; Goh, C G

    2007-01-01

    Cationic cellulosic polymers find wide utility as benefit agents in shampoo. Deposition of these polymers onto hair has been shown to mend split-ends, improve appearance and wet combing, as well as provide controlled delivery of insoluble actives. The deposition is thought to be enhanced by the formation of a polymer/surfactant complex that phase-separates from the bulk solution upon dilution. A standard characterization method has been developed to characterize the coacervate formation upon dilution, but the test is time and material prohibitive. We have developed a semi-automated high throughput workflow to characterize the coacervate-forming behavior of different shampoo formulations. A procedure that allows testing of real use shampoo dilutions without first formulating a complete shampoo was identified. This procedure was adapted to a Tecan liquid handler by optimizing the parameters for liquid dispensing as well as for mixing. The high throughput workflow enabled preparation and testing of hundreds of formulations with different types and levels of cationic cellulosic polymers and surfactants, and for each formulation a haze diagram was constructed. Optimal formulations and their dilutions that give substantial coacervate formation (determined by haze measurements) were identified. Results from this high throughput workflow were shown to reproduce standard haze and bench-top turbidity measurements, and this workflow has the advantages of using less material and allowing more variables to be tested with significant time savings.

  14. Workflow and Electronic Health Records in Small Medical Practices

    PubMed Central

    Ramaiah, Mala; Subrahmanian, Eswaran; Sriram, Ram D; Lide, Bettijoyce B

    2012-01-01

    This paper analyzes the workflow and implementation of electronic health record (EHR) systems across different functions in small physician offices. We characterize the differences in the offices based on the levels of computerization in terms of workflow, sources of time delay, and barriers to using EHR systems to support the entire workflow. The study was based on a combination of questionnaires, interviews, in situ observations, and data collection efforts. This study was not intended to be a full-scale time-and-motion study with precise measurements but was intended to provide an overview of the potential sources of delays while performing office tasks. The study follows an interpretive model of case studies rather than a large-sample statistical survey of practices. To identify time-consuming tasks, workflow maps were created based on the aggregated data from the offices. The results from the study show that specialty physicians are more favorable toward adopting EHR systems than primary care physicians are. The barriers to adoption of EHR systems by primary care physicians can be attributed to the complex workflows that exist in primary care physician offices, leading to nonstandardized workflow structures and practices. Also, primary care physicians would benefit more from EHR systems if the systems could interact with external entities. PMID:22737096

  15. Forensic applications of supercritical fluid chromatography - mass spectrometry.

    PubMed

    Pauk, Volodymyr; Lemr, Karel

    2018-06-01

    Achievements of supercritical fluid chromatography with mass spectrometric detection in the field of forensic science during the last decade are reviewed. The main topics include analysis of traditional drugs of abuse (e.g. cannabis, methamphetamine) as well as new psychoactive substances (synthetic cannabinoids, cathinones and phenethylamines), doping agents (anabolic steroids, stimulants, diuretics, analgesics etc.) and chemical warfare agents. Control of food authenticity, detection of adulteration and identification of toxic substances in food are also pointed out. The main aspects of an analytical workflow, such as sample preparation, separation, and detection, are discussed. Special attention is paid to the performance characteristics and validation parameters of supercritical fluid chromatography-mass spectrometric methods in comparison with other separation techniques. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. The Development of a High-Throughput/Combinatorial Workflow for the Study of Porous Polymer Networks

    DTIC Science & Technology

    2012-04-05

    A high-throughput/combinatorial workflow was developed to study porous polymer networks by systematically varying polymer network cross-link density, poragen composition, poragen level, and cure temperature. A total of 216 unique compositions were prepared. Changes in the opacity of the blends as they cured allowed for the identification of the compositional and process variables that enabled the production of porous networks.

  17. Single-Center Experience with a Targeted Next Generation Sequencing Assay for Assessment of Relevant Somatic Alterations in Solid Tumors.

    PubMed

    Paasinen-Sohns, Aino; Koelzer, Viktor H; Frank, Angela; Schafroth, Julian; Gisler, Aline; Sachs, Melanie; Graber, Anne; Rothschild, Sacha I; Wicki, Andreas; Cathomas, Gieri; Mertz, Kirsten D

    2017-03-01

    Companion diagnostics rely on genomic testing of molecular alterations to enable effective cancer treatment. Here we report the clinical application and validation of the Oncomine Focus Assay (OFA), an integrated, commercially available next-generation sequencing (NGS) assay for the rapid and simultaneous detection of single nucleotide variants, short insertions and deletions, copy number variations, and gene rearrangements in 52 cancer genes with therapeutic relevance. Two independent patient cohorts were investigated to define the workflow, turnaround times, feasibility, and reliability of OFA targeted sequencing in clinical application and using archival material. Cohort I consisted of 59 diagnostic clinical samples from the daily routine submitted for molecular testing over a 4-month time period. Cohort II consisted of 39 archival melanoma samples that were up to 15 years old. Libraries were prepared from isolated nucleic acids and sequenced on the Ion Torrent PGM sequencer. Sequencing datasets were analyzed using the Ion Reporter software. Genomic alterations were identified and validated by orthogonal conventional assays including pyrosequencing and immunohistochemistry. Sequencing results of both cohorts, including archival formalin-fixed, paraffin-embedded material stored up to 15 years, were consistent with published variant frequencies. A concordance of 100% between established assays and OFA targeted NGS was observed. The OFA workflow enabled a turnaround of 3½ days. Taken together, OFA was found to be a convenient tool for fast, reliable, broadly applicable and cost-effective targeted NGS of tumor samples in routine diagnostics. Thus, OFA has strong potential to become an important asset for precision oncology. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Development of a Rapid Point-of-Use DNA Test for the Screening of Genuity® Roundup Ready 2 Yield® Soybean in Seed Samples.

    PubMed

    Chandu, Dilip; Paul, Sudakshina; Parker, Mathew; Dudin, Yelena; King-Sitzes, Jennifer; Perez, Tim; Mittanck, Don W; Shah, Manali; Glenn, Kevin C; Piepenburg, Olaf

    2016-01-01

    Testing for the presence of genetically modified material in seed samples is of critical importance for all stakeholders in the agricultural industry, including growers, seed manufacturers, and regulatory bodies. While rapid antibody-based testing for the transgenic protein has fulfilled this need in the past, the introduction of new variants of a given transgene demands new diagnostic regimens that allow different traits to be distinguished at the nucleic acid level. Although such molecular tests can be performed by PCR in the laboratory, their requirement for expensive equipment and sophisticated operation has prevented their uptake in point-of-use applications. A recently developed isothermal DNA amplification technique, recombinase polymerase amplification (RPA), combines simple sample preparation and amplification workflow procedures with the use of minimal detection equipment in real time. Here, we report the development of a highly sensitive and specific RPA-based detection system for Genuity Roundup Ready 2 Yield (RR2Y) material in soybean (Glycine max) seed samples and present the results of studies applying the method in both laboratory and field-type settings.

  19. recount workflow: Accessing over 70,000 human RNA-seq samples with Bioconductor

    PubMed Central

    Collado-Torres, Leonardo; Nellore, Abhinav; Jaffe, Andrew E.

    2017-01-01

    The recount2 resource is composed of over 70,000 uniformly processed human RNA-seq samples spanning TCGA and SRA, including GTEx. The processed data can be accessed via the recount2 website and the recount Bioconductor package. This workflow explains in detail how to use the recount package and how to integrate it with other Bioconductor packages for several analyses that can be carried out with the recount2 resource. In particular, we describe how the coverage count matrices were computed in recount2 as well as different ways of obtaining public metadata, which can facilitate downstream analyses. Step-by-step directions show how to do a gene-level differential expression analysis, visualize base-level genome coverage data, and perform analyses at multiple feature levels. This workflow thus provides further information to understand the data in recount2 and a compendium of R code to use the data. PMID:29043067

  20. Characterising and correcting batch variation in an automated direct infusion mass spectrometry (DIMS) metabolomics workflow.

    PubMed

    Kirwan, J A; Broadhurst, D I; Davidson, R L; Viant, M R

    2013-06-01

    Direct infusion mass spectrometry (DIMS)-based untargeted metabolomics measures many hundreds of metabolites in a single experiment. While every effort is made to reduce within-experiment analytical variation in untargeted metabolomics, unavoidable sources of measurement error are introduced. This is particularly true for large-scale multi-batch experiments, necessitating the development of robust workflows that minimise batch-to-batch variation. Here, we conducted a purpose-designed, eight-batch DIMS metabolomics study using nanoelectrospray (nESI) Fourier transform ion cyclotron resonance mass spectrometric analyses of mammalian heart extracts. First, we characterised the intrinsic analytical variation of this approach to determine whether our existing workflows are fit for purpose when applied to a multi-batch investigation. Batch-to-batch variation was readily observed across the 7-day experiment, both in terms of its absolute measurement using quality control (QC) and biological replicate samples, as well as its adverse impact on our ability to discover significant metabolic information within the data. Subsequently, we developed and implemented a computational workflow that includes total-ion-current filtering, QC-robust spline batch correction and spectral cleaning, and provide conclusive evidence that this workflow reduces analytical variation and increases the proportion of significant peaks. We report an overall analytical precision of 15.9%, measured as the median relative standard deviation (RSD) for the technical replicates of the biological samples, across eight batches and 7 days of measurements. When compared against the FDA guidelines for biomarker studies, which specify an RSD of <20% as an acceptable level of precision, we conclude that our new workflows are fit for purpose for large-scale, high-throughput nESI DIMS metabolomics studies.
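
    A minimal sketch of the precision metric reported above: per-peak relative standard deviation across technical replicates, summarized as the median RSD and compared with the 20% guideline; the intensity matrix is simulated:

        # Sketch of the reported precision metric: per-peak relative standard deviation (RSD)
        # across technical replicates, summarized as the median RSD and compared against the
        # 20% guideline. The intensity matrix is randomly generated for illustration.
        import numpy as np

        rng = np.random.default_rng(7)
        # rows = technical replicates of one biological sample, columns = peaks
        intensities = rng.normal(loc=1000, scale=120, size=(6, 500)).clip(min=1)

        rsd = intensities.std(axis=0, ddof=1) / intensities.mean(axis=0) * 100
        print(f"median RSD: {np.median(rsd):.1f}%")
        print(f"peaks within the 20% guideline: {(rsd < 20).mean():.0%}")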

  1. Toward reliable biomarker signatures in the age of liquid biopsies - how to standardize the small RNA-Seq workflow

    PubMed Central

    Buschmann, Dominik; Haberberger, Anna; Kirchner, Benedikt; Spornraft, Melanie; Riedmaier, Irmgard; Schelling, Gustav; Pfaffl, Michael W.

    2016-01-01

    Small RNA-Seq has emerged as a powerful tool in transcriptomics, gene expression profiling and biomarker discovery. Sequencing cell-free nucleic acids, particularly microRNA (miRNA), from liquid biopsies additionally provides exciting possibilities for molecular diagnostics, and might help establish disease-specific biomarker signatures. The complexity of the small RNA-Seq workflow, however, bears challenges and biases that researchers need to be aware of in order to generate high-quality data. Rigorous standardization and extensive validation are required to guarantee reliability, reproducibility and comparability of research findings. Hypotheses based on flawed experimental conditions can be inconsistent and even misleading. Comparable to the well-established MIQE guidelines for qPCR experiments, this work aims at establishing guidelines for experimental design and pre-analytical sample processing, standardization of library preparation and sequencing reactions, as well as facilitating data analysis. We highlight bottlenecks in small RNA-Seq experiments, point out the importance of stringent quality control and validation, and provide a primer for differential expression analysis and biomarker discovery. Following our recommendations will encourage better sequencing practice, increase experimental transparency and lead to more reproducible small RNA-Seq results. This will ultimately enhance the validity of biomarker signatures, and allow reliable and robust clinical predictions. PMID:27317696

  2. Combined Multidimensional Microscopy as a Histopathology Imaging Tool.

    PubMed

    Shami, Gerald J; Cheng, Delfine; Braet, Filip

    2017-02-01

    Herein, we present a highly versatile bioimaging workflow for the multidimensional imaging of biological structures across vastly different length scales. Such an approach allows for the optimised preparation of samples in one go for consecutive X-ray micro-computed tomography, bright-field light microscopy and backscattered scanning electron microscopy, thus facilitating the disclosure of combined structural information ranging from the gross tissue or cellular level, down to the nanometre scale. In this current study, we characterize various aspects of the hepatic vasculature, ranging from such large vessels as branches of the hepatic portal vein and hepatic artery, down to the smallest sinusoidal capillaries. By employing high-resolution backscattered scanning electron microscopy, we were able to further characterize the subcellular features of a range of hepatic sinusoidal cells, including liver sinusoidal endothelial cells, pit cells and Kupffer cells. Above all, we demonstrate the capabilities of a specimen manipulation workflow that can be applied and adapted to a plethora of functional and structural investigations and experimental models. Such an approach harnesses the fundamental advantages inherent to the various imaging modalities presented herein, and when combined, offers information not currently available by any single imaging platform. J. Cell. Physiol. 232: 249-256, 2017. © 2016 Wiley Periodicals, Inc.

  3. Toward Streamlined Identification of Dioxin-like Compounds in Environmental Samples through Integration of Suspension Bioassay.

    PubMed

    Xiao, Hongxia; Brinkmann, Markus; Thalmann, Beat; Schiwy, Andreas; Große Brinkhaus, Sigrid; Achten, Christine; Eichbaum, Kathrin; Gembé, Carolin; Seiler, Thomas-Benjamin; Hollert, Henner

    2017-03-21

    Effect-directed analysis (EDA) is a powerful strategy to identify biologically active compounds in environmental samples. However, in current EDA studies, fractionation and handling procedures are laborious, consist of multiple evaporation steps, and thus bear the risk of contamination and decreased recoveries of the target compounds. The low resulting throughput has been one of the major bottlenecks of EDA. Here, we propose a high-throughput EDA (HT-EDA) work-flow combining reversed phase high-performance liquid chromatography fractionation of samples into 96-well microplates, followed by toxicity assessment in the micro-EROD bioassay with the wild-type rat hepatoma H4IIE cells, and chemical analysis of bioactive fractions. The approach was evaluated using single substances, binary mixtures, and extracts of sediment samples collected at the Three Gorges Reservoir, Yangtze River, China, as well as the rivers Rhine and Elbe, Germany. Selected bioactive fractions were analyzed by highly sensitive gas chromatography-atmospheric pressure laser ionization-time-of-flight-mass spectrometry. In addition, we optimized the work-flow by seeding previously adapted suspension-cultured H4IIE cells directly into the microplate used for fractionation, which makes any transfers of fractionated samples unnecessary. The proposed HT-EDA work-flow simplifies the procedure for wider application in ecotoxicology and environmental routine programs.

  4. Protein Chips Compatible with MALDI Mass Spectrometry Prepared by Ambient Ion Landing.

    PubMed

    Pompach, Petr; Benada, Oldřich; Rosůlek, Michal; Darebná, Petra; Hausner, Jiří; Růžička, Viktor; Volný, Michael; Novák, Petr

    2016-09-06

    We present a technology that allows the preparation of matrix-assisted laser desorption/ionization (MALDI)-compatible protein chips by ambient ion landing of proteins and successive utilization of the resulting protein chips for the development of bioanalytical assays. These assays are based on the interaction between the immobilized protein and the sampled analyte directly on the protein chip and subsequent in situ analysis by MALDI mass spectrometry. The electrosprayed proteins are immobilized on dry metal and metal oxide surfaces, which are nonreactive under normal conditions. The ion landing of electrosprayed protein molecules is performed under atmospheric pressure by an automated ion landing apparatus that can manufacture protein chips with a predefined array of sample positions or any other geometry of choice. The protein chips prepared by this technique are fully compatible with MALDI ionization because the metal-based substrates are conductive and durable enough to be used directly as MALDI plates. Compared to other materials, the nonreactive surfaces show minimal nonspecific interactions with chemical species in the investigated sample and are thus an ideal substrate for selective protein chips. Three types of protein chips were used in this report to demonstrate the bioanalytical applications of ambient ion landing. The protein chips with immobilized proteolytic enzymes showed the usefulness for fast in situ peptide MALDI sequencing; the lectin-based protein chips showed the ability to enrich glycopeptides from complex mixtures with subsequent MALDI analysis, and the protein chips with immobilized antibodies were used for a novel immunoMALDI workflow that allowed the enrichment of antigens from the serum followed by highly specific MALDI detection.

  5. Direct infusion mass spectrometry metabolomics dataset: a benchmark for data processing and quality control

    PubMed Central

    Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R

    2014-01-01

    Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly-measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770
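
    To illustrate how QC samples support batch correction, here is a deliberately simple Python sketch that rescales each batch by its QC medians; this is not the published batch-correction algorithm the dataset was designed to test, and the data are simulated:

        # A deliberately simple QC-based batch correction for illustration: scale each feature
        # in each batch by that batch's median QC intensity. This is not the published
        # batch-correction algorithm; the data here are simulated.
        import numpy as np

        rng = np.random.default_rng(3)
        n_batches, n_samples, n_features = 8, 10, 200
        batch_effect = rng.uniform(0.7, 1.3, size=(n_batches, 1, 1))
        samples = rng.normal(1000, 100, size=(n_batches, n_samples, n_features)) * batch_effect
        qcs = rng.normal(1000, 50, size=(n_batches, 5, n_features)) * batch_effect

        qc_median = np.median(qcs, axis=1, keepdims=True)        # per batch, per feature
        corrected = samples / qc_median * np.median(qcs, axis=(0, 1))

        before = samples.reshape(-1, n_features).std(axis=0).mean()
        after = corrected.reshape(-1, n_features).std(axis=0).mean()
        print(f"mean feature SD across batches: {before:.1f} -> {after:.1f}")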

  6. Galaxy-M: a Galaxy workflow for processing and analyzing direct infusion and liquid chromatography mass spectrometry-based metabolomics data.

    PubMed

    Davidson, Robert L; Weber, Ralf J M; Liu, Haoyu; Sharma-Oates, Archana; Viant, Mark R

    2016-01-01

    Metabolomics is increasingly recognized as an invaluable tool in the biological, medical and environmental sciences yet lags behind the methodological maturity of other omics fields. To achieve its full potential, including the integration of multiple omics modalities, the accessibility, standardization and reproducibility of computational metabolomics tools must be improved significantly. Here we present our end-to-end mass spectrometry metabolomics workflow in the widely used platform, Galaxy. Named Galaxy-M, our workflow has been developed for both direct infusion mass spectrometry (DIMS) and liquid chromatography mass spectrometry (LC-MS) metabolomics. The range of tools presented spans from processing of raw data, e.g. peak picking and alignment, through data cleansing, e.g. missing value imputation, to preparation for statistical analysis, e.g. normalization and scaling, and principal components analysis (PCA) with associated statistical evaluation. We demonstrate the ease of using these Galaxy workflows via the analysis of DIMS and LC-MS datasets, and provide PCA scores and associated statistics to help other users to ensure that they can accurately repeat the processing and analysis of these two datasets. Galaxy and data are all provided pre-installed in a virtual machine (VM) that can be downloaded from the GigaDB repository. Additionally, source code, executables and installation instructions are available from GitHub. The Galaxy platform has enabled us to produce an easily accessible and reproducible computational metabolomics workflow. More tools could be added by the community to expand its functionality. We recommend that Galaxy-M workflow files are included within the supplementary information of publications, enabling metabolomics studies to achieve greater reproducibility.
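
    The data cleansing, scaling and PCA steps listed in this record can be sketched outside Galaxy as follows. This is a minimal stand-alone illustration using NumPy and scikit-learn, not the Galaxy-M tool code; the feature-matrix layout and the choice of total-intensity normalisation are assumptions made for brevity.

        # Minimal sketch of missing-value imputation, normalisation, scaling and PCA.
        import numpy as np
        from sklearn.decomposition import PCA

        def preprocess_and_pca(X, n_components=2):
            """X: (samples x features) peak-intensity matrix with NaN for missing values."""
            X = np.asarray(X, dtype=float)
            # Impute missing values with the per-feature median.
            col_median = np.nanmedian(X, axis=0)
            X = np.where(np.isnan(X), col_median, X)
            # Normalise each sample to its total intensity, then log-transform.
            X = np.log1p(X / X.sum(axis=1, keepdims=True))
            # Autoscale (mean-centre, unit variance) before PCA.
            std = X.std(axis=0, ddof=1)
            std[std == 0] = 1.0
            X = (X - X.mean(axis=0)) / std
            return PCA(n_components=n_components).fit_transform(X)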

  7. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    PubMed

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recent advances in high-throughput sequencing allow virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to enable broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol works not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies and provides opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  8. Hematocrit-Independent Quantitation of Stimulants in Dried Blood Spots: Pipet versus Microfluidic-Based Volumetric Sampling Coupled with Automated Flow-Through Desorption and Online Solid Phase Extraction-LC-MS/MS Bioanalysis.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-07-05

    A workflow overcoming microsample collection issues and hematocrit (HCT)-related bias would facilitate more widespread use of dried blood spots (DBS). This report describes comparative results between the use of a pipet and a microfluidic-based sampling device for the creation of volumetric DBS. Both approaches were successfully coupled to HCT-independent, fully automated sample preparation and online liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) analysis allowing detection of five stimulants in finger prick blood. Reproducible, selective, accurate, and precise responses meeting generally accepted regulated bioanalysis guidelines were observed over the range of 5-1000 ng/mL whole blood. The applied heated flow-through solvent desorption of the entire spot and online solid phase extraction (SPE) procedure were unaffected by the blood's HCT value within the tested range of 28.0-61.5% HCT. Enhanced stability for mephedrone on DBS compared to liquid whole blood was observed. Finger prick blood samples were collected using both volumetric sampling approaches over a time course of 25 h after intake of a single oral dose of phentermine. A pharmacokinetic curve for the incurred phentermine was successfully produced using the described validated method. These results suggest that either volumetric sample collection method may be amenable to field-use followed by fully automated, HCT-independent DBS-SPE-LC-MS/MS bioanalysis for the quantitation of these representative controlled substances. Analytical data from DBS prepared with a pipet and microfluidic-based sampling devices were comparable, but the latter is easier to operate, making this approach more suitable for sample collection by unskilled persons.

  9. Ambient Mass Spectrometry in Cancer Research.

    PubMed

    Takats, Z; Strittmatter, N; McKenzie, J S

    2017-01-01

    Ambient ionization mass spectrometry was developed as a sample preparation-free alternative to traditional MS-based workflows. Desorption electrospray ionization (DESI)-MS methods were demonstrated to allow the direct analysis of a broad range of samples, including unaltered biological tissue specimens. In contrast to this advantageous feature, DESI-MS is nowadays almost exclusively used for sample preparation-intensive mass spectrometric imaging (MSI) in the area of cancer research. As an alternative to MALDI, DESI-MSI offers a matrix deposition-free experiment with improved signal in the lower m/z (<500) range. DESI-MSI enables the spatial mapping of tumor metabolism and has been broadly demonstrated to offer an alternative to frozen-section histology for intraoperative tissue identification and surgical margin assessment. Rapid evaporative ionization mass spectrometry (REIMS) was developed exclusively for the latter purpose by the direct combination of electrosurgical devices and mass spectrometry. In the REIMS technology, aerosol particles produced by electrosurgical dissection are subjected to MS analysis, providing spectral information on the structural lipid composition of tissues. REIMS technology was demonstrated to give real-time information on the histological nature of tissues being dissected, making it an ideal tool for intraoperative tissue identification, including surgical margin control. More recently, the method has also been used for the rapid lipidomic phenotyping of cancer cell lines, as demonstrated for the NCI-60 cell line collection. © 2017 Elsevier Inc. All rights reserved.

  10. Review of sample preparation strategies for MS-based metabolomic studies in industrial biotechnology.

    PubMed

    Causon, Tim J; Hann, Stephan

    2016-09-28

    Fermentation and cell culture biotechnology in the form of so-called "cell factories" now play an increasingly significant role in production of both large (e.g. proteins, biopharmaceuticals) and small organic molecules for a wide variety of applications. However, associated metabolic engineering optimisation processes relying on genetic modification of organisms used in cell factories, or alteration of production conditions remain a challenging undertaking for improving the final yield and quality of cell factory products. In addition to genomic, transcriptomic and proteomic workflows, analytical metabolomics continues to play a critical role in studying detailed aspects of critical pathways (e.g. via targeted quantification of metabolites), identification of biosynthetic intermediates, and also for phenotype differentiation and the elucidation of previously unknown pathways (e.g. via non-targeted strategies). However, the diversity of primary and secondary metabolites and the broad concentration ranges encompassed during typical biotechnological processes means that simultaneous extraction and robust analytical determination of all parts of interest of the metabolome is effectively impossible. As the integration of metabolome data with transcriptome and proteome data is an essential goal of both targeted and non-targeted methods addressing production optimisation goals, additional sample preparation steps beyond necessary sampling, quenching and extraction protocols including clean-up, analyte enrichment, and derivatisation are important considerations for some classes of metabolites, especially those present in low concentrations or exhibiting poor stability. This contribution critically assesses the potential of current sample preparation strategies applied in metabolomic studies of industrially-relevant cell factory organisms using mass spectrometry-based platforms primarily coupled to liquid-phase sample introduction (i.e. flow injection, liquid chromatography, or capillary electrophoresis). Particular focus is placed on the selectivity and degree of enrichment attainable, as well as demands of speed, absolute quantification, robustness and, ultimately, consideration of fully-integrated bioanalytical solutions to optimise sample handling and throughput. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. The ABCs of PDFs.

    ERIC Educational Resources Information Center

    Adler, Steve

    2000-01-01

    Explains the use of Adobe Acrobat's Portable Document Format (PDF) for school Web sites and Intranets. Explains the PDF workflow; components for Web-based PDF delivery, including the Web server, preparing content of the PDF files, and the browser; incorporating PDFs into the Web site; incorporating multimedia; and software. (LRW)

  12. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons.

    PubMed

    Post, Harm; Penning, Renske; Fitzpatrick, Martin A; Garrigues, Luc B; Wu, W; MacGillavry, Harold D; Hoogenraad, Casper C; Heck, Albert J R; Altelaar, A F Maarten

    2017-02-03

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC-MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts, placing new demands on enrichment protocols to make them less labor-intensive, more sensitive, and less prone to variability. Here we assessed an automated enrichment protocol using Fe(III)-IMAC cartridges on an AssayMAP Bravo platform to meet these demands. The automated Fe(III)-IMAC-based enrichment workflow proved to be more effective when compared to a TiO2-based enrichment using the same platform and a manual Ti(IV)-IMAC-based enrichment workflow. As initial samples, a dilution series of both human HeLa cell and primary rat hippocampal neuron lysates was used, going down to 0.1 μg of peptide starting material. The optimized workflow proved to be efficient, sensitive, and reproducible, identifying, localizing, and quantifying thousands of phosphosites from just micrograms of starting material. To further test the automated workflow in genuine biological applications, we monitored EGF-induced signaling in hippocampal neurons, starting with only 200,000 primary cells, resulting in ∼50 μg of protein material. This revealed a comprehensive phosphoproteome, showing regulation of multiple members of the MAPK pathway and reduced phosphorylation status of two glutamate receptors involved in synaptic plasticity.

  13. MS/MS library facilitated MRM quantification of native peptides prepared by denaturing ultrafiltration

    PubMed Central

    2012-01-01

    Naturally occurring native peptides provide important information about physiological states of an organism and its changes in disease conditions, but protocols and methods for assessing their abundance are not well developed. In this paper, we describe a simple procedure for the quantification of non-tryptic peptides in body fluids. The workflow includes an enrichment step followed by two-dimensional fractionation of native peptides and MS/MS data management facilitating the design and validation of LC-MRM MS assays. The added value of the workflow is demonstrated in the development of a triplex LC-MRM MS assay used for quantification of peptides potentially associated with the progression of liver disease to hepatocellular carcinoma. PMID:22304756

  14. Single-Cell mRNA-Seq Using the Fluidigm C1 System and Integrated Fluidics Circuits.

    PubMed

    Gong, Haibiao; Do, Devin; Ramakrishnan, Ramesh

    2018-01-01

    Single-cell mRNA-seq is a valuable tool to dissect expression profiles and to understand the regulatory network of genes. Microfluidics is well suited for single-cell analysis owing both to the small volume of the reaction chambers and to the ease of automation. Here we describe the workflow of single-cell mRNA-seq using the C1 IFC, which can isolate and process up to 96 cells. Both the on-chip procedure (lysis, reverse transcription, and preamplification PCR) and the off-chip sequencing library preparation protocols are described. The workflow generates full-length mRNA information, which is more informative than 3' end counting methods for many applications.

  15. Data processing workflows from low-cost digital survey to various applications: three case studies of Chinese historic architecture

    NASA Astrophysics Data System (ADS)

    Sun, Z.; Cao, Y. K.

    2015-08-01

    The paper focuses on the versatility of data processing workflows ranging from BIM-based survey to structural analysis and reverse modeling. In China, a large number of historic buildings are in need of restoration, reinforcement and renovation, but architects are often not prepared for the shift from the booming AEC industry to architectural preservation. As surveyors working with architects in such projects, we have to develop an efficient, low-cost digital survey workflow that is robust to various types of architecture, and to process the captured data for architects. Although laser scanning yields high accuracy in architectural heritage documentation and the workflow is quite straightforward, its cost and limited portability hinder its use in projects where budget and efficiency are of prime concern. We therefore integrate Structure from Motion techniques with UAV and total station data acquisition. The captured data are processed for various purposes, illustrated with three case studies: the first is an as-built BIM for a historic building based on point clouds registered to Ground Control Points; the second concerns structural analysis of a damaged bridge using Finite Element Analysis software; the last relates to parametric automated feature extraction from captured point clouds for reverse modeling and fabrication.

  16. High precision quantification of human plasma proteins using the automated SISCAPA Immuno-MS workflow.

    PubMed

    Razavi, Morteza; Leigh Anderson, N; Pope, Matthew E; Yip, Richard; Pearson, Terry W

    2016-09-25

    Efficient robotic workflows for trypsin digestion of human plasma and subsequent antibody-mediated peptide enrichment (the SISCAPA method) were developed with the goal of improving assay precision and throughput for multiplexed protein biomarker quantification. First, an 'addition only' tryptic digestion protocol was simplified from classical methods, eliminating the need for sample cleanup, while improving reproducibility, scalability and cost. Second, methods were developed to allow multiplexed enrichment and quantification of peptide surrogates of protein biomarkers representing a very broad range of concentrations and widely different molecular masses in human plasma. The total workflow coefficients of variation (including the 3 sequential steps of digestion, SISCAPA peptide enrichment and mass spectrometric analysis) for 5 proteotypic peptides measured in 6 replicates of each of 6 different samples repeated over 6 days averaged 3.4% within-run and 4.3% across all runs. An experiment to identify sources of variation in the workflow demonstrated that MRM measurement and tryptic digestion steps each had average CVs of ∼2.7%. Because of the high purity of the peptide analytes enriched by antibody capture, the liquid chromatography step is minimized and in some cases eliminated altogether, enabling throughput levels consistent with requirements of large biomarker and clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.
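
    The within-run and total coefficients of variation quoted above can be reproduced from replicate peak areas with a short calculation such as the one below. The nested design (replicates within runs/days) is taken from the abstract, but the code itself is a generic CV calculation, not the authors' analysis script.

        # Generic CV calculation for replicates nested within runs.
        import numpy as np

        def cv_percent(values):
            values = np.asarray(values, dtype=float)
            return 100.0 * values.std(ddof=1) / values.mean()

        def within_run_and_total_cv(peak_areas):
            """peak_areas: (runs x replicates) array of responses for one peptide."""
            peak_areas = np.asarray(peak_areas, dtype=float)
            within_run = np.mean([cv_percent(run) for run in peak_areas])  # mean of per-run CVs
            total = cv_percent(peak_areas.ravel())                         # CV across all runs
            return within_run, total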

  17. A comprehensive quality control workflow for paired tumor-normal NGS experiments.

    PubMed

    Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc

    2017-06-01

    Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure the validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific to tumor-normal pairs can be calculated. The segmentation into different tools offers high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run under both Linux and Windows. Plotting requires Python 2.7 and matplotlib. The software is available under the 'GNU General Public License version 2' as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
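
    One example of a QC metric that only exists for multi-sample experiments is a sample-identity check between the tumor and normal libraries. The sketch below computes genotype concordance at shared germline SNP positions to flag possible sample swaps; it is an illustration of the idea only and is not the metric set implemented in ngs-bits.

        # Illustrative tumor-normal identity check (not the ngs-bits implementation).
        def genotype_concordance(tumor_genotypes, normal_genotypes):
            """Both inputs: dict mapping (chrom, pos) -> genotype string, e.g. '0/1'."""
            shared = set(tumor_genotypes) & set(normal_genotypes)
            if not shared:
                return 0.0
            matches = sum(tumor_genotypes[s] == normal_genotypes[s] for s in shared)
            return matches / len(shared)

        # A concordance far below ~0.8 at germline SNPs would suggest that the
        # two libraries do not originate from the same patient.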

  18. The Design of a Quantitative Western Blot Experiment

    PubMed Central

    Taylor, Sean C.; Posch, Anton

    2014-01-01

    Western blotting, a technique that has been in practice for more than three decades, began as a means of detecting a protein target in a complex sample. Although there have been significant advances in imaging and reagent technologies that improve sensitivity, the dynamic range of detection, and the applicability of multiplexed target detection, the basic technique has remained essentially unchanged. In the past, western blotting was used simply to detect a specific target protein in a complex mixture, but journal editors and reviewers now request the quantitative interpretation of western blot data in terms of fold changes in protein expression between samples. The calculations are based on the differential densitometry of the associated chemiluminescent and/or fluorescent signals from the blots, and this requires a fundamental shift in the experimental methodology, acquisition, and interpretation of the data. We have recently published an updated approach to produce quantitative densitometric data from western blots (Taylor et al., 2013), and here we summarize the complete western blot workflow with a focus on sample preparation and data analysis for quantitative western blotting. PMID:24738055
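
    The fold-change calculation referred to above typically normalises each target band to a loading control before comparing lanes. The sketch below shows that arithmetic; it is a generic illustration, not code from the cited publication, and the example intensities are invented.

        # Densitometric fold change: normalise to a loading control, then compare
        # each lane to a chosen control lane. Illustrative values only.
        def fold_changes(target_signal, loading_signal, control_index=0):
            normalised = [t / l for t, l in zip(target_signal, loading_signal)]
            reference = normalised[control_index]
            return [n / reference for n in normalised]

        # Lane 0 is the untreated control.
        print(fold_changes([1200, 2500, 900], [1000, 1100, 950]))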

  19. Importing, Working With, and Sharing Microstructural Data in the StraboSpot Digital Data System, Including an Example Dataset from the Pilbara Craton, Western Australia.

    NASA Astrophysics Data System (ADS)

    Roberts, N.; Cunningham, H.; Snell, A.; Newman, J.; Tikoff, B.; Chatzaras, V.; Walker, J. D.; Williams, R. T.

    2017-12-01

    There is currently no repository where a geologist can survey microstructural datasets collected from a specific field area or deformation experiment. New development of the StraboSpot digital data system provides such a repository as well as visualization and analysis tools. StraboSpot is a graph database that allows field geologists to share primary data and develop new types of scientific questions. The database can be accessed through: 1) a field-based mobile application that runs on iOS and Android mobile devices; and 2) a desktop system. We are expanding StraboSpot to include the handling of a variety of microstructural data types. Presented here are the detailed vocabulary and logic used for the input of microstructural data, and how this system operates with the anticipated workflow of users. Microstructural data include observations and interpretations from photomicrographs, scanning electron microscope images, electron backscatter diffraction, and transmission electron microscopy data. The workflow for importing microstructural data into StraboSpot is organized into the following tabs: Images; Mineralogy & Composition; Sedimentary; Igneous; Metamorphic; Fault Rocks; Grain size & configuration; Crystallographic Preferred Orientation; Reactions; Geochronology; Relationships; and Interpretations. Both the sample and its thin sections are treated as spots. For the sample spot, the user can specify whether a sample is experimental or natural; natural samples are inherently linked to their field context. For the thin-section (sub-sample) spot, the user can select between different options for sample preparation, geometry, and methods. A universal framework for thin-section orientation is given, which allows users to overlay different microscope images of the same area and preserves georeferenced orientation. We provide an example dataset of field and microstructural data from the Mt Edgar dome, a granitic complex in the Paleoarchean East Pilbara craton, Australia. StraboSpot provides a single place for georeferenced geologic data at every spatial scale, in which data are interconnected. Incorporating microstructural data into an open-access platform will give field and experimental geologists a library of microstructural data across a range of tectonic and experimental contexts.

  20. Low-Cost, High-Throughput Sequencing of DNA Assemblies Using a Highly Multiplexed Nextera Process.

    PubMed

    Shapland, Elaine B; Holmes, Victor; Reeves, Christopher D; Sorokin, Elena; Durot, Maxime; Platt, Darren; Allen, Christopher; Dean, Jed; Serber, Zach; Newman, Jack; Chandran, Sunil

    2015-07-17

    In recent years, next-generation sequencing (NGS) technology has greatly reduced the cost of sequencing whole genomes, whereas the cost of sequence verification of plasmids via Sanger sequencing has remained high. Consequently, industrial-scale strain engineers either limit the number of designs or take shortcuts in quality control. Here, we show that over 4000 plasmids can be completely sequenced in one Illumina MiSeq run for less than $3 each (15× coverage), a 20-fold reduction over using Sanger sequencing (2× coverage). We reduced the volume of the Nextera tagmentation reaction by 100-fold and developed an automated workflow to prepare thousands of samples for sequencing. We also developed software to track the samples and associated sequence data and to rapidly identify correctly assembled constructs having the fewest defects. As DNA synthesis and assembly become a centralized commodity, this NGS quality control (QC) process will be essential to groups operating high-throughput pipelines for DNA construction.
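
    The final step described above, picking correctly assembled constructs with the fewest defects, reduces to a simple ranking once variants against the intended design have been called. The sketch below shows that decision logic; it is an illustration written for this summary, not the authors' tracking software, and the clone identifiers are invented.

        # Rank clones by the number of defects (variants vs. the intended design)
        # and return those passing a defect threshold plus the overall best clone.
        def pick_best_clones(clone_variants, max_defects=0):
            """clone_variants: dict mapping clone id -> list of (position, change) defects."""
            ranked = sorted(clone_variants.items(), key=lambda kv: len(kv[1]))
            passing = [clone for clone, defects in ranked if len(defects) <= max_defects]
            return passing, (ranked[0] if ranked else None)

        clones = {"A01": [], "A02": [(1520, "G>A")], "A03": [(88, "delT"), (3021, "C>T")]}
        print(pick_best_clones(clones))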

  1. Precise pooling and dispensing of microfluidic droplets towards micro- to macro-world interfacing

    PubMed Central

    Brouzes, Eric; Carniol, April; Bakowski, Tomasz; Strey, Helmut H.

    2014-01-01

    Droplet microfluidics possesses unique properties, such as the ability to carry out multiple independent reactions without dispersion of samples in microchannels. We seek to extend the use of droplet microfluidics to a new range of applications by enabling its integration into workflows based on traditional technologies, such as microtiter plates. Our strategy consists of developing a novel method to manipulate, pool and deliver a precise number of microfluidic droplets. To this aim, we present a basic module that combines droplet trapping with an on-chip valve. We quantitatively analyzed the trapping efficiency of the basic module in order to optimize its design. We also demonstrate the integration of the basic module into a multiplex device that can deliver 8 droplets at every cycle. This device will have a great impact on low-throughput droplet applications that require interfacing with macroscale technologies. The micro-to-macro interface is particularly critical in microfluidic applications that aim at sample preparation and has not been rigorously addressed in this context. PMID:25485102

  2. A Method for Label-Free, Differential Top-Down Proteomics.

    PubMed

    Ntai, Ioanna; Toby, Timothy K; LeDuc, Richard D; Kelleher, Neil L

    2016-01-01

    Biomarker discovery in translational research has relied heavily on labeled and label-free quantitative bottom-up proteomics. Here, we describe a new approach to biomarker studies that utilizes high-throughput top-down proteomics and is the first to offer whole-protein characterization and relative quantitation within the same experiment. Using yeast as a model, we report procedures for a label-free approach to quantify the relative abundance of intact proteins ranging from 0 to 30 kDa in two different states. In this chapter, we describe the integrated methodology for the large-scale profiling and quantitation of the intact proteome by liquid chromatography-mass spectrometry (LC-MS) without the need for metabolic or chemical labeling. This recent advance in quantitative top-down proteomics is best implemented with a robust and highly controlled sample preparation workflow before data acquisition on a high-resolution mass spectrometer, and with the application of a hierarchical linear statistical model to account for the multiple levels of variance contained in quantitative proteomic comparisons of samples for basic and clinical research.

  3. Correction of stain variations in nuclear refractive index of clinical histology specimens

    PubMed Central

    Uttam, Shikhar; Bista, Rajan K.; Hartman, Douglas J.; Brand, Randall E.; Liu, Yang

    2011-01-01

    For any technique to be adopted into a clinical setting, it is imperative that it seamlessly integrates with the well-established clinical diagnostic workflow. We recently developed an optical microscopy technique, spatial-domain low-coherence quantitative phase microscopy (SL-QPM), that can extract the refractive index of the cell nucleus from standard histology specimens on glass slides prepared via standard clinical protocols. This technique has shown great potential for detecting cancer with better sensitivity than conventional pathology. A major hurdle in the clinical translation of this technique is the intrinsic variation among staining agents used in histology specimens, which limits the accuracy of refractive index measurements of clinical samples. In this paper, we present a simple and easily generalizable method to remove the effect of variations in staining levels on the nuclear refractive index obtained with SL-QPM. We illustrate the efficacy of our correction method by applying it to variously stained histology samples from an animal model and clinical specimens. PMID:22112118

  4. Analytical quality assurance in veterinary drug residue analysis methods: matrix effects determination and monitoring for sulfonamides analysis.

    PubMed

    Hoff, Rodrigo Barcellos; Rübensam, Gabriel; Jank, Louise; Barreto, Fabiano; Peralba, Maria do Carmo Ruaro; Pizzolato, Tânia Mara; Silvia Díaz-Cruz, M; Barceló, Damià

    2015-01-01

    In residue analysis of veterinary drugs in foodstuffs, matrix effects are one of the most critical points. This work discusses approaches used to estimate, minimize and monitor matrix effects in bioanalytical methods. Qualitative and quantitative methods for the estimation of matrix effects, such as post-column infusion, slope-ratio analysis, calibration curves (mathematical and statistical analysis) and control-chart monitoring, are discussed using real data. Matrix effects varied over a wide range depending on the analyte and the sample preparation method: pressurized liquid extraction of liver samples showed matrix effects from 15.5 to 59.2%, while ultrasound-assisted extraction gave values from 21.7 to 64.3%. The influence of the matrix itself was also evaluated: for sulfamethazine analysis, signal losses ranged from -37 to -96% for fish and eggs, respectively. Advantages and drawbacks are also discussed in the context of a proposed workflow for matrix effect assessment, applied to real data from sulfonamide residue analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
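
    The slope-ratio analysis mentioned in this record compares calibration slopes obtained in matrix-matched and pure-solvent standards; one common convention reports ME% = (slope_matrix / slope_solvent - 1) x 100, with negative values indicating ion suppression. The sketch below implements that convention with invented calibration data.

        # Matrix effect from the ratio of calibration slopes (one common convention).
        import numpy as np

        def matrix_effect_percent(conc, response_matrix, response_solvent):
            slope_matrix, _ = np.polyfit(conc, response_matrix, 1)
            slope_solvent, _ = np.polyfit(conc, response_solvent, 1)
            return (slope_matrix / slope_solvent - 1.0) * 100.0

        # Hypothetical sulfonamide calibration data (arbitrary response units).
        conc = [5, 10, 25, 50, 100]
        print(matrix_effect_percent(conc, [40, 85, 210, 430, 860], [55, 110, 270, 545, 1090]))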

  5. The cobas p 630 instrument: a dedicated pre-analytic solution to optimize COBAS® AmpliPrep/COBAS® TaqMan® system workflow and turn-around-time.

    PubMed

    Vallefuoco, L; Sorrentino, R; Spalletti Cernia, D; Colucci, G; Portella, G

    2012-12-01

    The cobas p 630, a fully automated pre-analytical instrument for primary tube handling recently introduced to complete the COBAS® TaqMan systems portfolio, was evaluated in conjunction with the COBAS® AmpliPrep/COBAS® TaqMan HBV Test, v2.0, the COBAS® AmpliPrep/COBAS® TaqMan HCV Test, v1.0 and the COBAS® AmpliPrep/COBAS® TaqMan HIV Test, v2.0. The instrument's performance in transferring samples from primary to secondary tubes, its impact on improving COBAS® AmpliPrep/COBAS® TaqMan workflow and reducing hands-on time, and the risk of possible cross-contamination were assessed. Samples from 42 HBsAg-positive, 42 HCV antibody-positive and 42 HIV antibody-positive patients, as well as 21 healthy blood donors, were processed with or without automated primary tube handling. HIV, HCV and HBsAg positive samples showed correlation indices of 0.999, 0.987 and 0.994, respectively. To assess cross-contamination, high-titer HBV DNA, HCV RNA and HIV RNA positive samples were distributed in the cobas p 630 in alternate tube positions, adjacent to negative control samples within the same rack. None of the healthy donor samples showed any reactivity. Based on these results, the cobas p 630 can improve workflow and sample tracing in laboratories performing molecular tests, and reduce turnaround time, errors, and risks. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Relative impact of key sources of systematic noise in Affymetrix and Illumina gene-expression microarray experiments.

    PubMed

    Kitchen, Robert R; Sabine, Vicky S; Simen, Arthur A; Dixon, J Michael; Bartlett, John M S; Sims, Andrew H

    2011-12-01

    Systematic processing noise, which includes batch effects, is very common in microarray experiments but is often ignored despite its potential to confound or compromise experimental results. Compromised results are most likely when re-analysing or integrating datasets from public repositories due to the different conditions under which each dataset is generated. To better understand the relative noise-contributions of various factors in experimental-design, we assessed several Illumina and Affymetrix datasets for technical variation between replicate hybridisations of Universal Human Reference (UHRR) and individual or pooled breast-tumour RNA. A varying degree of systematic noise was observed in each of the datasets, however in all cases the relative amount of variation between standard control RNA replicates was found to be greatest at earlier points in the sample-preparation workflow. For example, 40.6% of the total variation in reported expression was attributed to replicate extractions, compared to 13.9% due to amplification/labelling and 10.8% between replicate hybridisations. Deliberate probe-wise batch-correction methods were effective in reducing the magnitude of this variation, although the level of improvement was dependent on the sources of noise included in the model. Systematic noise introduced at the chip, run, and experiment levels of a combined Illumina dataset were found to be highly dependent upon the experimental design. Both UHRR and pools of RNA, which were derived from the samples of interest, modelled technical variation well although the pools were significantly better correlated (4% average improvement) and better emulated the effects of systematic noise, over all probes, than the UHRRs. The effect of this noise was not uniform over all probes, with low GC-content probes found to be more vulnerable to batch variation than probes with a higher GC-content. The magnitude of systematic processing noise in a microarray experiment is variable across probes and experiments, however it is generally the case that procedures earlier in the sample-preparation workflow are liable to introduce the most noise. Careful experimental design is important to protect against noise, detailed meta-data should always be provided, and diagnostic procedures should be routinely performed prior to downstream analyses for the detection of bias in microarray studies.
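
    The probe-wise batch correction evaluated above can be approximated, in its simplest location-only form, by mean-centring each probe within each batch, as sketched below. The study compared richer models (including amplification/labelling and hybridisation terms); this minimal version is shown only to make the idea concrete, and the array layout is assumed.

        # Location-only probe-wise batch correction: mean-centre each probe per batch.
        import numpy as np

        def mean_centre_batches(log_expression, batch):
            """log_expression: (samples x probes) array; batch: per-sample batch labels."""
            corrected = log_expression.astype(float).copy()
            grand_mean = log_expression.mean(axis=0)
            for b in np.unique(batch):
                rows = batch == b
                corrected[rows] += grand_mean - log_expression[rows].mean(axis=0)
            return corrected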

  7. Relative impact of key sources of systematic noise in Affymetrix and Illumina gene-expression microarray experiments

    PubMed Central

    2011-01-01

    Background Systematic processing noise, which includes batch effects, is very common in microarray experiments but is often ignored despite its potential to confound or compromise experimental results. Compromised results are most likely when re-analysing or integrating datasets from public repositories due to the different conditions under which each dataset is generated. To better understand the relative noise-contributions of various factors in experimental-design, we assessed several Illumina and Affymetrix datasets for technical variation between replicate hybridisations of Universal Human Reference (UHRR) and individual or pooled breast-tumour RNA. Results A varying degree of systematic noise was observed in each of the datasets, however in all cases the relative amount of variation between standard control RNA replicates was found to be greatest at earlier points in the sample-preparation workflow. For example, 40.6% of the total variation in reported expression was attributed to replicate extractions, compared to 13.9% due to amplification/labelling and 10.8% between replicate hybridisations. Deliberate probe-wise batch-correction methods were effective in reducing the magnitude of this variation, although the level of improvement was dependent on the sources of noise included in the model. Systematic noise introduced at the chip, run, and experiment levels of a combined Illumina dataset were found to be highly dependent upon the experimental design. Both UHRR and pools of RNA, which were derived from the samples of interest, modelled technical variation well although the pools were significantly better correlated (4% average improvement) and better emulated the effects of systematic noise, over all probes, than the UHRRs. The effect of this noise was not uniform over all probes, with low GC-content probes found to be more vulnerable to batch variation than probes with a higher GC-content. Conclusions The magnitude of systematic processing noise in a microarray experiment is variable across probes and experiments, however it is generally the case that procedures earlier in the sample-preparation workflow are liable to introduce the most noise. Careful experimental design is important to protect against noise, detailed meta-data should always be provided, and diagnostic procedures should be routinely performed prior to downstream analyses for the detection of bias in microarray studies. PMID:22133085

  8. Development of a Rapid Point-of-Use DNA Test for the Screening of Genuity® Roundup Ready 2 Yield® Soybean in Seed Samples

    PubMed Central

    Chandu, Dilip; Paul, Sudakshina; Parker, Mathew; Dudin, Yelena; King-Sitzes, Jennifer; Perez, Tim; Mittanck, Don W.; Shah, Manali; Glenn, Kevin C.; Piepenburg, Olaf

    2016-01-01

    Testing for the presence of genetically modified material in seed samples is of critical importance for all stakeholders in the agricultural industry, including growers, seed manufacturers, and regulatory bodies. While rapid antibody-based testing for the transgenic protein has fulfilled this need in the past, the introduction of new variants of a given transgene demands a new diagnostic regimen that allows different traits to be distinguished at the nucleic acid level. Although such molecular tests can be performed by PCR in the laboratory, their requirement for expensive equipment and sophisticated operation has prevented uptake in point-of-use applications. A recently developed isothermal DNA amplification technique, recombinase polymerase amplification (RPA), combines simple sample preparation and amplification workflow procedures with the use of minimal detection equipment in real time. Here, we report the development of a highly sensitive and specific RPA-based detection system for Genuity Roundup Ready 2 Yield (RR2Y) material in soybean (Glycine max) seed samples and present the results of studies applying the method in both laboratory and field-type settings. PMID:27314015

  9. Multiplex PCR method for MinION and Illumina sequencing of Zika and other virus genomes directly from clinical samples

    PubMed Central

    Quick, Josh; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J

    2018-01-01

    Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples without isolation remains challenging for viruses such as Zika, where metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence complete genomes comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimised library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved starting with clinical samples in 1-2 days following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. PMID:28538739

  10. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  11. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
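
    The central idea of both records above is that peptide-level significance must be aggregated into an organism-level significance before candidates can be ranked. The sketch below shows one generic way to do this (converting E-values to p-values and combining them with Fisher's method, assuming independent peptides); it is not the unified E-value computation used by MiCId and is included only to make the aggregation step concrete.

        # Generic aggregation of peptide E-values into one organism-level p-value.
        # Not the MiCId algorithm; assumes independent peptide identifications.
        import math
        from scipy import stats

        def combined_pvalue(peptide_evalues):
            # p = 1 - exp(-E) converts an E-value to a p-value for E >= 0.
            pvalues = [-math.expm1(-e) for e in peptide_evalues]
            statistic = -2.0 * sum(math.log(max(p, 1e-300)) for p in pvalues)
            return stats.chi2.sf(statistic, df=2 * len(pvalues))

        print(combined_pvalue([0.001, 0.05, 0.2]))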

  12. msCompare: A Framework for Quantitative Analysis of Label-free LC-MS Data for Comparative Candidate Biomarker Studies*

    PubMed Central

    Hoekman, Berend; Breitling, Rainer; Suits, Frank; Bischoff, Rainer; Horvatovich, Peter

    2012-01-01

    Data processing forms an integral part of biomarker discovery and contributes significantly to the ultimate result. To compare and evaluate various publicly available open source label-free data processing workflows, we developed msCompare, a modular framework that allows the arbitrary combination of different feature detection/quantification and alignment/matching algorithms in conjunction with a novel scoring method to evaluate their overall performance. We used msCompare to assess the performance of workflows built from modules of publicly available data processing packages such as SuperHirn, OpenMS, and MZmine and our in-house developed modules on peptide-spiked urine and trypsin-digested cerebrospinal fluid (CSF) samples. We found that the quality of results varied greatly among workflows, and interestingly, heterogeneous combinations of algorithms often performed better than the homogeneous workflows. Our scoring method showed that the union of feature matrices of different workflows outperformed the original homogeneous workflows in some cases. msCompare is open source software (https://trac.nbic.nl/mscompare), and we provide a web-based data processing service for our framework by integration into the Galaxy server of the Netherlands Bioinformatics Center (http://galaxy.nbic.nl/galaxy) to allow scientists to determine which combination of modules provides the most accurate processing for their particular LC-MS data sets. PMID:22318370

  13. Use of contextual inquiry to understand anatomic pathology workflow: Implications for digital pathology adoption

    PubMed Central

    Ho, Jonhan; Aridor, Orly; Parwani, Anil V.

    2012-01-01

    Background: For decades, the anatomic pathology (AP) workflow has been a highly manual process based on the use of an optical microscope and glass slides. Recent innovations in scanning and digitizing entire glass slides are accelerating a move toward widespread adoption and implementation of a workflow based on digital slides and their supporting information management software. To support the design of digital pathology systems and ensure their adoption into pathology practice, the needs of the main users within the AP workflow, the pathologists, should be identified. Contextual inquiry is a qualitative, user-centered, social method designed to identify and understand users' needs and is utilized for collecting, interpreting, and aggregating in-detail aspects of work. Objective: Contextual inquiry was utilized to document the current AP workflow, identify processes that may benefit from the introduction of digital pathology systems, and establish design requirements for digital pathology systems that will meet pathologists' needs. Materials and Methods: Pathologists were observed and interviewed at a large academic medical center according to the contextual inquiry guidelines established by Holtzblatt et al. (1998). Notes representing user-provided data were documented during observation sessions. An affinity diagram, a hierarchical organization of the notes based on common themes in the data, was created. Five graphical models were developed to help visualize the data, including sequence, flow, artifact, physical, and cultural models. Results: A total of six pathologists were observed by a team of two researchers. A total of 254 affinity notes were documented and organized using a topical hierarchy, including 75 third-level, 24 second-level, and five main-level categories: technology, communication, synthesis/preparation, organization, and workflow. The current AP workflow was labor intensive and lacked scalability. A large number of processes that may improve following the introduction of digital pathology systems were identified. These work processes included case management, case examination and review, and final case reporting. Furthermore, a digital slide system should integrate with the anatomic pathology laboratory information system. Conclusions: To our knowledge, this is the first study to utilize the contextual inquiry method to document AP workflow. Findings were used to establish key requirements for the design of digital pathology systems. PMID:23243553

  14. Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Langenhorst, A. R.; Balaji, V.; Yakovlev, A.

    2010-12-01

    A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow:
    ● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override.
    ● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments are verified with new code releases.
    ● FRE is flexible enough to support everything from short runs with mere megabytes of data to high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally within the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments.
    ● Experiments are entered into a Curator Database, which stores queryable metadata about the experiment and the experiment's output.
    ● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to the preparation of graphical output for a scientific audience and the publication of data on a public portal.
    ● Recent FRE development includes incorporating a distributed workflow to support remote computing.

  15. Comparison of ribosomal RNA removal methods for transcriptome sequencing workflows in teleost fish

    USDA-ARS?s Scientific Manuscript database

    RNA sequencing (RNA-Seq) is becoming the standard for transcriptome analysis. Removal of contaminating ribosomal RNA (rRNA) is a priority in the preparation of libraries suitable for sequencing. rRNAs are commonly removed from total RNA via either mRNA selection or rRNA depletion. These methods have...

  16. Determining Data Information Literacy Needs: A Study of Students and Research Faculty

    ERIC Educational Resources Information Center

    Carlson, Jacob; Fosmire, Michael; Miller, C. C.; Nelson, Megan Sapp

    2011-01-01

    Researchers increasingly need to integrate the disposition, management, and curation of their data into their current workflows. However, it is not yet clear to what extent faculty and students are sufficiently prepared to take on these responsibilities. This paper articulates the need for a data information literacy program (DIL) to prepare…

  17. Considering Time in Orthophotography Production: from a General Workflow to a Shortened Workflow for a Faster Disaster Response

    NASA Astrophysics Data System (ADS)

    Lucas, G.

    2015-08-01

    This article deals with the production time of orthophoto imagery acquired with a medium-size digital frame camera. The workflow examination covers two main parts: data acquisition and post-processing. The objectives of the research are fourfold: 1/ gathering time references for the most important steps of orthophoto production (it turned out that the literature is missing on this topic); these figures are later used for total production time estimation; 2/ identifying levers for reducing orthophoto production time; 3/ building a simplified production workflow for emergency response, less demanding in terms of accuracy and faster, and comparing it to a classical workflow; 4/ providing methodical elements for the estimation of production time for a custom project. In the data acquisition part, a comprehensive review lists and describes all the factors that may affect acquisition efficiency. Using a simulation with different variables (average line length, turn time, flight speed), their effect on acquisition efficiency is quantitatively examined. Regarding post-processing, the time reference figures were collected from the processing of a 1000-frame case study with 15 cm GSD covering a rectangular area of 447 km2; the time required to complete each step of the production was recorded. When several technical options are possible, each one is tested and its time documented so that all alternatives are available. Based on the chosen workflow options and the compiled time references for the elementary steps, a total time is calculated for the post-processing of the 1000 frames. Two scenarios are compared with regard to time and accuracy. The first one follows "normal" practices, comprising triangulation, orthorectification and advanced mosaicking methods (feature detection, seam line editing and seam applicator); the second is simplified and compromises on positional accuracy (using direct georeferencing) and seam line preparation in order to produce orthophotos faster. The shortened workflow reduces production time by a factor of more than three, whereas the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation throughout the production process shows that it is worth saving time in the post-processing phase.
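
    The acquisition-efficiency simulation described above combines average line length, turn time and flight speed into a total flying time. A minimal version of such a model is sketched below; the block geometry and the example parameter values are assumptions, not figures from the study.

        # Simple image-acquisition time model from line length, spacing, speed and turns.
        def acquisition_time_hours(block_width_m, line_length_m, line_spacing_m,
                                   flight_speed_ms, turn_time_s):
            n_lines = max(1, round(block_width_m / line_spacing_m))
            flying_time_s = n_lines * (line_length_m / flight_speed_ms)
            turning_time_s = (n_lines - 1) * turn_time_s
            return (flying_time_s + turning_time_s) / 3600.0

        # 15 km x 30 km block, 300 m line spacing, 60 m/s aircraft, 120 s per turn.
        print(acquisition_time_hours(15000, 30000, 300, 60, 120))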

  18. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality, especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation, which may occur when verification relies on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact, as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis was carried out of data related to medication errors identified during the preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as dose volumes outside tolerance levels, identified during the weighing stages of chemotherapy preparation, which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759,060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors that would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
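
    The gravimetric check behind these figures compares the dose implied by the weighed mass of solution with the prescribed dose and flags deviations outside a centre-defined tolerance. The sketch below shows that calculation; the density, concentration and tolerance values are illustrative assumptions, not parameters from the study.

        # Percent deviation of the delivered dose inferred from a weighed mass.
        def dose_deviation_percent(weighed_mass_g, density_g_per_ml, conc_mg_per_ml,
                                   prescribed_dose_mg):
            delivered_volume_ml = weighed_mass_g / density_g_per_ml
            delivered_dose_mg = delivered_volume_ml * conc_mg_per_ml
            return 100.0 * (delivered_dose_mg - prescribed_dose_mg) / prescribed_dose_mg

        deviation = dose_deviation_percent(46.8, 1.02, 2.0, 100.0)
        print(deviation, "within tolerance" if abs(deviation) <= 5.0 else "outside tolerance")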

  19. On-Line Electrochemical Reduction of Disulfide Bonds: Improved FTICR-CID and -ETD Coverage of Oxytocin and Hepcidin

    NASA Astrophysics Data System (ADS)

    Nicolardi, Simone; Giera, Martin; Kooijman, Pieter; Kraj, Agnieszka; Chervet, Jean-Pierre; Deelder, André M.; van der Burgt, Yuri E. M.

    2013-12-01

    Particularly in the field of middle- and top-down peptide and protein analysis, disulfide bridges can severely hinder fragmentation and thus impede sequence analysis (coverage). Here we present an on-line/electrochemistry/ESI-FTICR-MS approach, which was applied to the analysis of the primary structure of oxytocin, containing one disulfide bridge, and of hepcidin, containing four disulfide bridges. The presented workflow provided up to 80 % (on-line) conversion of disulfide bonds in both peptides. With minimal sample preparation, such reduction resulted in a higher number of peptide backbone cleavages upon CID or ETD fragmentation, and thus yielded improved sequence coverage. The cycle times, including electrode recovery, were rapid and, therefore, might very well be coupled with liquid chromatography for protein or peptide separation, which has great potential for high-throughput analysis.

  20. Technology platform development for targeted plasma metabolites in human heart failure.

    PubMed

    Chan, Cy X'avia; Khan, Anjum A; Choi, Jh Howard; Ng, Cm Dominic; Cadeiras, Martin; Deng, Mario; Ping, Peipei

    2013-01-01

    Heart failure is a multifactorial disease associated with staggeringly high morbidity and mortality. Recently, alterations of multiple metabolites have been implicated in heart failure; however, the lack of an effective technology platform to assess these metabolites has limited our understanding of how they contribute to this disease phenotype. We have successfully developed a new workflow combining specific sample preparation with tandem mass spectrometry that enables us to extract most of the targeted metabolites. Nineteen metabolites were chosen owing to their biological relevance to heart failure, including extracellular matrix remodeling, inflammation, insulin resistance, renal dysfunction, and cardioprotection against ischemic injury. In this report, we systematically engineered, optimized and refined a protocol applicable to human plasma samples; this study contributes to methodology development with respect to deproteinization, incubation, reconstitution, and detection with mass spectrometry. The deproteinization step was optimized with 20% methanol/ethanol at a plasma:solvent ratio of 1:3. Subsequently, an incubation step was implemented which remarkably enhanced the metabolite signals and the number of metabolite peaks detected by mass spectrometry in both positive and negative modes. For the reconstitution step, 0.1% formic acid was chosen over 6.5 mM ammonium bicarbonate: both solvents gave a comparable number of metabolite peaks, but the signal detected in the former was higher. Using this finalized protocol, we were able to retrieve 13 out of 19 targeted metabolites from human plasma. We have successfully devised a simple yet effective workflow for the targeted plasma metabolites relevant to human heart failure. This workflow will be employed in tandem with a high-throughput liquid chromatography-mass spectrometry platform to validate and characterize these potential metabolic biomarkers for diagnostic and therapeutic development in heart failure.
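
    As a worked illustration of the optimized deproteinization step (plasma diluted 1:3 with the 20% methanol/ethanol precipitant), the hypothetical helper below simply scales the precipitant volume to the available plasma volume; it is not part of the published protocol.

```python
# Hypothetical helper (not part of the published protocol) that scales the
# optimized deproteinization step to a given plasma volume: plasma diluted
# 1:3 (v/v) with the 20% methanol/ethanol precipitant.
def precipitant_volume(plasma_ul, plasma_to_solvent=(1, 3)):
    """Volume of 20% methanol/ethanol to add for a given plasma volume (µL)."""
    plasma_part, solvent_part = plasma_to_solvent
    return plasma_ul * solvent_part / plasma_part

print(precipitant_volume(100.0))  # 100 µL plasma -> 300.0 µL precipitant
```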

  1. Implementation of Epic Beaker Anatomic Pathology at an Academic Medical Center.

    PubMed

    Blau, John Larry; Wilford, Joseph D; Dane, Susan K; Karandikar, Nitin J; Fuller, Emily S; Jacobsmeier, Debbie J; Jans, Melissa A; Horning, Elisabeth A; Krasowski, Matthew D; Ford, Bradley A; Becker, Kent R; Beranek, Jeanine M; Robinson, Robert A

    2017-01-01

    Beaker is a relatively new laboratory information system (LIS) offered by Epic Systems Corporation as part of its suite of health-care software and bundled with its electronic medical record, EpicCare. It is divided into two modules, Beaker anatomic pathology (Beaker AP) and Beaker Clinical Pathology. In this report, we describe our experience implementing Beaker AP version 2014 at an academic medical center with a go-live date of October 2015. This report covers preimplementation preparations and challenges beginning in September 2014, issues discovered soon after go-live in October 2015, and some post go-live optimizations using data from meetings, debriefings, and the project closure document. We share specific issues that we encountered during implementation, including difficulties with the proposed frozen section workflow, developing a shared specimen source dictionary, and implementation of the standard Beaker workflow in a large institution with trainees. We share specific strategies that we used to overcome these issues for a successful Beaker AP implementation. Several areas of the laboratory required adaptation of the default Beaker build parameters to meet the needs of the workflow in a busy academic medical center. In a few areas, our laboratory was unable to use the Beaker functionality to support our workflow, and we have continued to use paper or have altered our workflow. In spite of several difficulties that required creative solutions before go-live, the implementation has been successful based on satisfaction surveys completed by pathologists and others who use the software. However, optimization of Beaker workflows has continued to be an ongoing process after go-live to the present time. The Beaker AP LIS can be successfully implemented at an academic medical center but requires significant forethought, creative adaptation, and continued shared management of the ongoing product by institutional and departmental information technology staff as well as laboratory managers to meet the needs of the laboratory.

  2. Successful adaption of a forensic toxicological screening workflow employing nontargeted liquid chromatography-tandem mass spectrometry to water analysis.

    PubMed

    Steger, Julia; Arnhard, Kathrin; Haslacher, Sandra; Geiger, Klemens; Singer, Klaus; Schlapp, Michael; Pitterl, Florian; Oberacher, Herbert

    2016-04-01

    Forensic toxicology and environmental water analysis share the common interest and responsibility in ensuring comprehensive and reliable confirmation of drugs and pharmaceutical compounds in samples analyzed. Dealing with similar analytes, detection and identification techniques should be exchangeable between scientific disciplines. Herein, we demonstrate the successful adaption of a forensic toxicological screening workflow employing nontargeted LC/MS/MS under data-dependent acquisition control and subsequent database search to water analysis. The main modification involved processing of an increased sample volume with SPE (500 mL vs. 1-10 mL) to reach LODs in the low ng/L range. Tandem mass spectra acquired with a qTOF instrument were submitted to database search. The targeted data mining strategy was found to be sensitive and specific; automated search produced hardly any false results. To demonstrate the applicability of the adapted workflow to complex samples, 14 wastewater effluent samples collected on seven consecutive days at the local wastewater-treatment plant were analyzed. Of the 88,970 fragment ion mass spectra produced, 8.8% of spectra were successfully assigned to one of the 1040 reference compounds included in the database, and this enabled the identification of 51 compounds representing important illegal drugs, members of various pharmaceutical compound classes, and metabolites thereof. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
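
    The database search itself is essentially spectrum-to-library matching. The sketch below is a generic illustration of that idea using binned fragment spectra and cosine similarity; it is not the search engine used in the study, and the peak lists are invented.

```python
# Generic illustration of spectrum-to-library matching (not the search engine
# used in the study): bin fragment m/z values and score a query spectrum
# against a reference with cosine similarity. Peak lists are invented.
import numpy as np

def binned(spectrum, max_mz=500.0, bin_width=1.0):
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in spectrum:
        vec[int(mz / bin_width)] += intensity
    return vec / np.linalg.norm(vec)

def cosine_score(query, reference):
    return float(np.dot(binned(query), binned(reference)))

query = [(91.05, 100.0), (119.05, 35.0), (167.10, 60.0)]
reference = [(91.05, 90.0), (119.05, 40.0), (167.10, 55.0)]
print(f"match score = {cosine_score(query, reference):.3f}")  # close to 1.0
```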

  3. Comparative evaluation of the Cobas Amplicor HIV-1 Monitor Ultrasensitive Test, the new Cobas AmpliPrep/Cobas Amplicor HIV-1 Monitor Ultrasensitive Test and the Versant HIV RNA 3.0 assays for quantitation of HIV-1 RNA in plasma samples.

    PubMed

    Berger, Annemarie; Scherzed, Lina; Stürmer, Martin; Preiser, Wolfgang; Doerr, Hans Wilhelm; Rabenau, Holger Felix

    2005-05-01

    There are several commercially available assays for the quantitation of HIV RNA. A new automated specimen preparation system, the Cobas AmpliPrep, was developed to automate the specimen preparation part of the PCR workflow. We compared the results obtained by the Roche Cobas Amplicor HIV-1 Monitor Ultrasensitive Test (MCA, manual sample preparation) with those by the Versant HIV-1 RNA 3.0 assay (bDNA). Secondly, we compared the MCA with the new Cobas AmpliPrep/Cobas Amplicor HIV Monitor Ultrasensitive Test (CAP/CA, automated specimen preparation) by investigating clinical patient samples and a panel of HIV-1 non-B subtypes. Furthermore, we assessed the assay throughput and workflow (especially hands-on time) for all three assays. Seventy-two percent of the 140 investigated patient samples gave concordant results in the bDNA and MCA assays. The MCA values were regularly higher than the bDNA values. One sample was detected only by the MCA within the linear range of quantification. In contrast, 38 samples with results <50 copies/ml in the MCA gave bDNA results between 51 and 1644 copies/ml (mean value 74 copies/ml); 21 of these specimens had detectable HIV RNA <50 copies/ml in the MCA assay. The overall agreement between the MCA and the CAP/CA was 94.3% (551/584). The quantification results showed significant correlation, although the CAP/CA generated values slightly lower than those generated by the manual procedure. We found that the CAP/CA produced results comparable to the MCA test in a panel of HIV-1 non-B subtypes. All three assays showed comparable results. The bDNA provides a high sample throughput without the need for full automation. The new CAP/CA provides reliable test results with no HIV-subtype-specific influence and frees up time for other work in the laboratory; thus, it is suitable for routine diagnostic PCR.

  4. PCR identification of bacteria in blood culture does not fit the daily workflow of a routine microbiology laboratory.

    PubMed

    Karumaa, Santra; Kärpänoja, Pauliina; Sarkkinen, Hannu

    2012-03-01

    We have evaluated the GenoType blood culture assay (Hain Lifescience, Nehren, Germany) for the identification of bacteria in 233 positive blood cultures and assessed its suitability in the workflow of a routine microbiology laboratory. In 68/233 (29.2%) samples, the culture result could not be confirmed by the GenoType assay due to a lack of primers in the test, multiple organisms in the sample, or inconsistency with respect to the identification by culture. Although the GenoType blood culture assay gives satisfactory results for bacteria for which primers are available, there are difficulties in applying the test in the routine microbiology laboratory.

  5. Optimized small molecule antibody labeling efficiency through continuous flow centrifugal diafiltration.

    PubMed

    Cappione, Amedeo; Mabuchi, Masaharu; Briggs, David; Nadler, Timothy

    2015-04-01

    Protein immuno-detection encompasses a broad range of analytical methodologies, including western blotting, flow cytometry, and microscope-based applications. These assays, which detect, quantify, and/or localize expression of one or more proteins in complex biological samples, are reliant upon fluorescent or enzyme-tagged target-specific antibodies. While small molecule labeling kits are available with a range of detection moieties, the workflow is hampered by a requirement for multiple dialysis-based buffer exchange steps that are both time-consuming and subject to sample loss. In a previous study, we briefly described an alternative method for small-scale protein labeling with small molecule dyes whereby all phases of the conjugation workflow could be performed in a single centrifugal diafiltration device. Here, we expand on this foundational work, addressing the functionality of the device at each step in the workflow (sample cleanup, labeling, unbound dye removal, and buffer exchange/concentration) and the implications for optimizing labeling efficiency. When compared to other common buffer exchange methodologies, centrifugal diafiltration offered superior performance as measured by four key parameters (process time, desalting capacity, protein recovery, and retention of functional integrity). Originally designed for resin-based affinity purification, the device also provides a platform for up-front antibody purification or albumin carrier removal. Most significantly, by exploiting the rapid kinetics of NHS-based labeling reactions, the process of continuous diafiltration minimizes reaction time and long exposure to excess dye, guaranteeing maximal target labeling while limiting the risks associated with over-labeling. Overall, the device offers a simplified workflow with reduced processing time and hands-on requirements, without sacrificing labeling efficiency, final yield, or conjugate performance. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  7. Development of isotope labeling LC-MS for human salivary metabolomics and application to profiling metabolome changes associated with mild cognitive impairment.

    PubMed

    Zheng, Jiamin; Dixon, Roger A; Li, Liang

    2012-12-18

    Saliva is a readily available biofluid that may contain metabolites of interest for diagnosis and prognosis of diseases. In this work, a differential (13)C/(12)C isotope dansylation labeling method, combined with liquid chromatography Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS), is described for quantitative profiling of the human salivary metabolome. New strategies are presented to optimize the sample preparation and LC-MS detection processes. The strategies allow the use of as little as 5 μL of saliva sample as a starting material to determine the concentration changes of an average of 1058 ion pairs or putative metabolites in comparative saliva samples. The overall workflow consists of several steps including acetone-induced protein precipitation, (12)C-dansylation labeling of the metabolites, and LC-UV measurement of the total concentration of the labeled metabolites in individual saliva samples. A pooled sample was prepared from all the individual samples and labeled with (13)C-dansylation to serve as a reference. Using this metabolome profiling method, it was found that comparable metabolome results could be obtained after saliva samples were stored in tubes normally used for genetic material collection at room temperature, in a -20 °C freezer, and in a -80 °C freezer over a period of 1 month, suggesting that many saliva samples already collected in genomic studies could become a valuable resource for metabolomics studies, although the effect of much longer-term storage remains to be determined. Finally, the developed method was applied to analyze the metabolome changes of two different groups: normal healthy older adults and comparable older adults with mild cognitive impairment (MCI). The 18 top-ranked metabolites successfully distinguished the two groups, among which seven metabolites were putatively identified while one metabolite, taurine, was definitively identified.
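
    Quantification in this design reduces to comparing each (12)C-labeled (individual sample) peak with its (13)C-labeled (pooled reference) partner. The sketch below illustrates that ratio calculation with invented peak intensities; it is not the authors' processing pipeline.

```python
# Conceptual sketch of the quantification step: each metabolite is detected
# as a 12C/13C ion pair, and the ratio of the individually labeled sample
# peak to the pooled 13C reference peak gives a relative concentration that
# is comparable across samples. Peak intensities below are invented.
light_peaks = {"taurine": 2.4e6, "metabolite_A": 8.1e5}   # 12C-dansyl, one sample
heavy_peaks = {"taurine": 1.2e6, "metabolite_A": 9.0e5}   # 13C-dansyl, pooled reference

ratios = {m: light_peaks[m] / heavy_peaks[m] for m in light_peaks}
for metabolite, ratio in ratios.items():
    print(f"{metabolite}: sample/reference = {ratio:.2f}")
```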

  8. Benchmarking quantitative label-free LC-MS data processing workflows using a complex spiked proteomic standard dataset.

    PubMed

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Van Dorssaeler, Alain; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-01-30

    Proteomic workflows based on nanoLC-MS/MS data-dependent-acquisition analysis have progressed tremendously in recent years. High-resolution and fast sequencing instruments have enabled the use of label-free quantitative methods, based either on spectral counting or on MS signal analysis, which appear as an attractive way to analyze differential protein expression in complex biological samples. However, the computational processing of the data for label-free quantification still remains a challenge. Here, we used a proteomic standard composed of an equimolar mixture of 48 human proteins (Sigma UPS1) spiked at different concentrations into a background of yeast cell lysate to benchmark several label-free quantitative workflows, involving different software packages developed in recent years. This experimental design allowed their performance to be finely assessed in terms of sensitivity and false discovery rate, by measuring the number of true and false positives (UPS1 and yeast background proteins, respectively, found as differential). The spiked standard dataset has been deposited to the ProteomeXchange repository with the identifier PXD001819 and can be used to benchmark other label-free workflows, adjust software parameter settings, improve algorithms for extraction of the quantitative metrics from raw MS data, or evaluate downstream statistical methods. Bioinformatic pipelines for label-free quantitative analysis must be objectively evaluated in their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. This can be done through the use of complex spiked samples, for which the "ground truth" of variant proteins is known, allowing a statistical evaluation of the performance of the data processing workflow. We provide here such a controlled standard dataset and used it to evaluate the performance of several label-free bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, for detection of variant proteins with different absolute expression levels and fold change values. The dataset presented here can be useful for tuning software tool parameters, for testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. Copyright © 2015 Elsevier B.V. All rights reserved.
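
    Because the ground truth is known, scoring a workflow against this standard is straightforward: reported differential proteins that are UPS1 spike-ins count as true positives, yeast background proteins as false positives. The sketch below illustrates that calculation with a hypothetical result list.

```python
# Scoring sketch for this benchmark design: differential proteins that are
# UPS1 spike-ins count as true positives, yeast background proteins as false
# positives. The reported list below is hypothetical.
def benchmark(reported_differential, ups1_proteins):
    tp = sum(1 for p in reported_differential if p in ups1_proteins)
    fp = len(reported_differential) - tp
    sensitivity = tp / len(ups1_proteins)
    fdr = fp / len(reported_differential) if reported_differential else 0.0
    return sensitivity, fdr

ups1 = {f"UPS1_{i:02d}" for i in range(48)}                  # 48 spiked human proteins
reported = sorted(ups1)[:40] + ["YEAST_ADH1", "YEAST_ENO1"]  # one workflow's output
sens, fdr = benchmark(reported, ups1)
print(f"sensitivity = {sens:.2f}, FDR = {fdr:.3f}")          # 0.83, 0.048
```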

  9. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using a proteotypic peptide-based quantitative proteomics strategy, owing to differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS was explicitly illustrated; different experimental variables, such as different MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors, were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast, mouse, and human, and to quantitative methods ranging from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
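
    The general idea behind selecting linearly correlated response peptides can be illustrated as follows; this is a hedged sketch of the concept, not the authors' exact ERLPS criteria. Peptides whose intensity profiles track the protein-level (median) profile across runs are retained for quantification.

```python
# Hedged sketch of correlated-peptide selection (not the published ERLPS
# criteria): keep peptides whose intensities correlate strongly with the
# median profile of the protein across runs.
import numpy as np

def select_correlated_peptides(intensity, r_min=0.95):
    """intensity: peptides x runs matrix; returns indices of kept peptides."""
    reference = np.median(intensity, axis=0)          # protein-level profile
    keep = []
    for i, row in enumerate(intensity):
        r = np.corrcoef(row, reference)[0, 1]
        if r >= r_min:
            keep.append(i)
    return keep

peptides = np.array([[1.0, 2.0, 4.0],    # follows the protein trend
                     [1.1, 2.1, 4.2],    # follows the trend
                     [3.0, 2.9, 3.1]])   # flat outlier, dropped
print(select_correlated_peptides(peptides))  # -> [0, 1]
```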

  10. Development of Two Analytical Methods Based on Reverse Phase Chromatographic and SDS-PAGE Gel for Assessment of Deglycosylation Yield in N-Glycan Mapping.

    PubMed

    Eckard, Anahita D; Dupont, David R; Young, Johnie K

    2018-01-01

    N-linked glycosylation is one of the critical quality attributes (CQAs) for biotherapeutics, impacting the safety and activity of the drug product. Changes in the pattern and level of glycosylation can significantly alter the intrinsic properties of the product and, therefore, have to be monitored throughout its lifecycle; a fast, precise, and unbiased N-glycan mapping assay is therefore desired. To ensure these qualities, analytical methods that evaluate the completeness of deglycosylation are necessary. For quantification of deglycosylation yield, methods such as reduced liquid chromatography-mass spectrometry (LC-MS) and reduced capillary gel electrophoresis (CGE) have been commonly used. Here we present the development of two additional methods to evaluate deglycosylation yield: one based on LC using a reverse phase (RP) column and one based on reduced sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) with offline software (GelAnalyzer). With the advent of commercial rapid deglycosylation workflows for N-glycan profiling that replace overnight incubation, we aimed to quantify the level of deglycosylation in a selected rapid deglycosylation workflow. Our results show well-resolved peaks of glycosylated and deglycosylated protein species with the RP-LC method, allowing simple, high-confidence quantification of the deglycosylation yield. Additionally, a good correlation (≥0.94) was found between deglycosylation yields estimated by the RP-LC method and by the reduced SDS-PAGE method with offline software. Evaluation of the rapid deglycosylation protocol from the GlycanAssure™ HyPerformance assay kit, performed on fetuin and RNase B, showed complete deglycosylation within the recommended protocol time when assessed with these techniques. Using this kit, N-glycans from the NIST mAb were prepared in 1.4 h and analyzed by hydrophilic interaction chromatography (HILIC) ultra-high performance LC (UHPLC) equipped with a fluorescence detector (FLD). Thirty-seven peaks were resolved with good resolution. Excellent sample preparation repeatability was found, with a relative standard deviation (RSD) of <5% for peaks with >0.5% relative area.
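
    The yield calculation itself is a simple ratio of integrated peak areas. The sketch below shows one plausible form of it (an assumed formula, with invented areas), where the glycosylated and deglycosylated species are the two resolved RP-LC peaks.

```python
# Assumed form of the yield calculation, using invented integrated RP-LC peak
# areas for the deglycosylated and residual glycosylated protein species.
def deglycosylation_yield(area_deglycosylated, area_glycosylated):
    return area_deglycosylated / (area_deglycosylated + area_glycosylated)

print(f"yield = {deglycosylation_yield(98.5, 1.5):.1%}")  # -> 98.5%
```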

  11. Genarris: Random generation of molecular crystal structures and fast screening with a Harris approximation

    NASA Astrophysics Data System (ADS)

    Li, Xiayue; Curtis, Farren S.; Rose, Timothy; Schober, Christoph; Vazquez-Mayagoitia, Alvaro; Reuter, Karsten; Oberhofer, Harald; Marom, Noa

    2018-06-01

    We present Genarris, a Python package that performs configuration space screening for molecular crystals of rigid molecules by random sampling with physical constraints. For fast energy evaluations, Genarris employs a Harris approximation, whereby the total density of a molecular crystal is constructed via superposition of single molecule densities. Dispersion-inclusive density functional theory is then used for the Harris density without performing a self-consistency cycle. Genarris uses machine learning for clustering, based on a relative coordinate descriptor developed specifically for molecular crystals, which is shown to be robust in identifying packing motif similarity. In addition to random structure generation, Genarris offers three workflows based on different sequences of successive clustering and selection steps: the "Rigorous" workflow is an exhaustive exploration of the potential energy landscape, the "Energy" workflow produces a set of low energy structures, and the "Diverse" workflow produces a maximally diverse set of structures. The latter is recommended for generating initial populations for genetic algorithms. Here, the implementation of Genarris is reported and its application is demonstrated for three test cases.
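
    The clustering-and-selection idea behind the "Diverse" workflow can be sketched generically as below; this is not the Genarris API or its actual descriptor and clustering choices, just an illustration using k-means on stand-in structure descriptors and one representative per cluster.

```python
# Generic sketch of a clustering-and-selection step (not the Genarris API or
# its actual descriptor/clustering choices): cluster stand-in structure
# descriptors with k-means and keep one representative per cluster to obtain
# a diverse subset of candidate structures.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 16))      # 500 candidate structures (stand-ins)

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(descriptors)
diverse_subset = []
for label in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == label)[0]
    center = kmeans.cluster_centers_[label]
    # keep the member closest to its cluster centre as the representative
    closest = members[np.argmin(np.linalg.norm(descriptors[members] - center, axis=1))]
    diverse_subset.append(int(closest))

print(len(diverse_subset), "structures selected for the diverse set")
```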

  12. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    PubMed Central

    Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.

    2017-01-01

    ABSTRACT The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312

  13. TCGA Workflow: Analyze cancer genomics and epigenomics data using Bioconductor packages

    PubMed Central

    Bontempi, Gianluca; Ceccarelli, Michele; Noushmehr, Houtan

    2016-01-01

    Biotechnological advances in sequencing have led to an explosion of publicly available data via large international consortia such as The Cancer Genome Atlas (TCGA), The Encyclopedia of DNA Elements (ENCODE), and The NIH Roadmap Epigenomics Mapping Consortium (Roadmap). These projects have provided unprecedented opportunities to interrogate the epigenome of cultured cancer cell lines as well as normal and tumor tissues with high genomic resolution. The Bioconductor project offers more than 1,000 open-source software and statistical packages to analyze high-throughput genomic data. However, most packages are designed for specific data types (e.g. expression, epigenetics, genomics) and there is no one comprehensive tool that provides a complete integrative analysis of the resources and data provided by all three public projects. A need to create an integration of these different analyses was recently proposed. In this workflow, we provide a series of biologically focused integrative analyses of different molecular data. We describe how to download, process and prepare TCGA data and by harnessing several key Bioconductor packages, we describe how to extract biologically meaningful genomic and epigenomic data. Using Roadmap and ENCODE data, we provide a work plan to identify biologically relevant functional epigenomic elements associated with cancer. To illustrate our workflow, we analyzed two types of brain tumors: low-grade glioma (LGG) versus high-grade glioma (glioblastoma multiform or GBM). This workflow introduces the following Bioconductor packages: AnnotationHub, ChIPSeeker, ComplexHeatmap, pathview, ELMER, GAIA, MINET, RTCGAToolbox,  TCGAbiolinks. PMID:28232861

  14. TCGA Workflow: Analyze cancer genomics and epigenomics data using Bioconductor packages.

    PubMed

    Silva, Tiago C; Colaprico, Antonio; Olsen, Catharina; D'Angelo, Fulvio; Bontempi, Gianluca; Ceccarelli, Michele; Noushmehr, Houtan

    2016-01-01

    Biotechnological advances in sequencing have led to an explosion of publicly available data via large international consortia such as The Cancer Genome Atlas (TCGA), The Encyclopedia of DNA Elements (ENCODE), and The NIH Roadmap Epigenomics Mapping Consortium (Roadmap). These projects have provided unprecedented opportunities to interrogate the epigenome of cultured cancer cell lines as well as normal and tumor tissues with high genomic resolution. The Bioconductor project offers more than 1,000 open-source software and statistical packages to analyze high-throughput genomic data. However, most packages are designed for specific data types (e.g. expression, epigenetics, genomics) and there is no one comprehensive tool that provides a complete integrative analysis of the resources and data provided by all three public projects. A need to create an integration of these different analyses was recently proposed. In this workflow, we provide a series of biologically focused integrative analyses of different molecular data. We describe how to download, process and prepare TCGA data and by harnessing several key Bioconductor packages, we describe how to extract biologically meaningful genomic and epigenomic data. Using Roadmap and ENCODE data, we provide a work plan to identify biologically relevant functional epigenomic elements associated with cancer. To illustrate our workflow, we analyzed two types of brain tumors: low-grade glioma (LGG) versus high-grade glioma (glioblastoma multiform or GBM). This workflow introduces the following Bioconductor packages: AnnotationHub, ChIPSeeker, ComplexHeatmap, pathview, ELMER, GAIA, MINET, RTCGAToolbox,  TCGAbiolinks.

  15. Quantitative Assessment of In-solution Digestion Efficiency Identifies Optimal Protocols for Unbiased Protein Analysis*

    PubMed Central

    León, Ileana R.; Schwämmle, Veit; Jensen, Ole N.; Sprenger, Richard R.

    2013-01-01

    The majority of mass spectrometry-based protein quantification studies uses peptide-centric analytical methods and thus strongly relies on efficient and unbiased protein digestion protocols for sample preparation. We present a novel objective approach to assess protein digestion efficiency using a combination of qualitative and quantitative liquid chromatography-tandem MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein fractions. We evaluated nine trypsin-based digestion protocols, based on standard in-solution or on spin filter-aided digestion, including new optimized protocols. We investigated various reagents for protein solubilization and denaturation (dodecyl sulfate, deoxycholate, urea), several trypsin digestion conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents before analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative liquid chromatography-tandem MS workflow quantified over 3700 distinct peptides with 96% completeness between all protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows for efficient, unbiased generation and recovery of peptides from all protein classes, including membrane proteins. This deoxycholate-assisted protocol was also optimal for spin filter-aided digestions as compared with existing methods. PMID:23792921

  16. Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.

    PubMed

    Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor

    2016-01-01

    In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time of manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant efforts and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward, which makes it almost impossible to share them. To address these issues, we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded the workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.

  17. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  18. SHIWA Services for Workflow Creation and Sharing in Hydrometeorology

    NASA Astrophysics Data System (ADS)

    Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely

    2014-05-01

    Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. To run these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer providing a user interface that researchers can use. There are many scientific workflow systems but they are not interoperable. To learn a workflow system and create workflows may require significant efforts. Considering these efforts it is not reasonable to expect that researchers will learn new workflow systems if they want to run workflows developed in other workflow systems. To overcome it requires creating workflow interoperability solutions to allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability concept (CGI). It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept and DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus executables and data needed to execute them. It offers a wide-range of browse and search operations. To support non-native workflow execution the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines or submits workflow engines with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application supports to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.

  19. Domain-Adapted Convolutional Networks for Satellite Image Classification: A Large-Scale Interactive Learning Workflow

    DOE PAGES

    Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...

    2018-02-06

    Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.
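
    The purpose of the hashing submodule, reducing redundancy among mass-selected examples, can be illustrated with a simple bucketing scheme. The sketch below is an assumed illustration, not the authors' implementation, using coarse quantization of CNN feature vectors as the hash key.

```python
# Assumed illustration of the redundancy-reduction idea behind the hashing
# submodule (not the authors' implementation): coarsely quantize each CNN
# feature vector, hash it, and keep only one example per bucket.
import numpy as np

def deduplicate(features, grid=0.5):
    """features: n_samples x n_dims array; returns indices of kept samples."""
    seen, kept = set(), []
    for i, f in enumerate(features):
        key = tuple(np.round(f / grid).astype(int))   # coarse quantization as hash key
        if key not in seen:
            seen.add(key)
            kept.append(i)
    return kept

feats = np.random.default_rng(1).normal(size=(1000, 8))
feats[500:] = feats[:500] + 0.001      # inject near-duplicates
print(len(deduplicate(feats)))         # ~500 kept; near-duplicates collapse
```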

  20. Domain-Adapted Convolutional Networks for Satellite Image Classification: A Large-Scale Interactive Learning Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.

    Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address the negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.

  1. Developing a workflow to identify inconsistencies in volunteered geographic information: a phenological case study

    USGS Publications Warehouse

    Mehdipoor, Hamed; Zurita-Milla, Raul; Rosemartin, Alyssa; Gerst, Katharine L.; Weltzin, Jake F.

    2015-01-01

    Recent improvements in online information communication and mobile location-aware technologies have led to the production of large volumes of volunteered geographic information. Widespread, large-scale efforts by volunteers to collect data can inform and drive scientific advances in diverse fields, including ecology and climatology. Traditional workflows to check the quality of such volunteered information can be costly and time consuming as they heavily rely on human interventions. However, identifying factors that can influence data quality, such as inconsistency, is crucial when these data are used in modeling and decision-making frameworks. Recently developed workflows use simple statistical approaches that assume that the majority of the information is consistent. However, this assumption is not generalizable, and ignores underlying geographic and environmental contextual variability that may explain apparent inconsistencies. Here we describe an automated workflow to check inconsistency based on the availability of contextual environmental information for sampling locations. The workflow consists of three steps: (1) dimensionality reduction to facilitate further analysis and interpretation of results, (2) model-based clustering to group observations according to their contextual conditions, and (3) identification of inconsistent observations within each cluster. The workflow was applied to volunteered observations of flowering in common and cloned lilac plants (Syringa vulgaris and Syringa x chinensis) in the United States for the period 1980 to 2013. About 97% of the observations for both common and cloned lilacs were flagged as consistent, indicating that volunteers provided reliable information for this case study. Relative to the original dataset, the exclusion of inconsistent observations changed the apparent rate of change in lilac bloom dates by two days per decade, indicating the importance of inconsistency checking as a key step in data quality assessment for volunteered geographic information. Initiatives that leverage volunteered geographic information can adapt this workflow to improve the quality of their datasets and the robustness of their scientific analyses.
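
    The three steps of the workflow map naturally onto standard components. The sketch below is a generic stand-in (not the published implementation): PCA for dimensionality reduction, a Gaussian mixture for model-based clustering, and a low-likelihood cut-off to flag observations inconsistent with their cluster's contextual conditions; the data are simulated.

```python
# Generic stand-in for the three-step workflow (not the published
# implementation). Data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 10))                 # contextual environmental covariates

reduced = PCA(n_components=3).fit_transform(X)                        # step 1
gmm = GaussianMixture(n_components=4, random_state=0).fit(reduced)    # step 2
log_likelihood = gmm.score_samples(reduced)
inconsistent = log_likelihood < np.percentile(log_likelihood, 3)      # step 3, bottom ~3%

print(f"flagged {inconsistent.sum()} of {len(X)} observations as inconsistent")
```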

  2. SigWin-detector: a Grid-enabled workflow for discovering enriched windows of genomic features related to DNA sequences.

    PubMed

    Inda, Márcia A; van Batenburg, Marinus F; Roos, Marco; Belloum, Adam S Z; Vasunin, Dmitry; Wibisono, Adianto; van Kampen, Antoine H C; Breit, Timo M

    2008-08-08

    Chromosome location is often used as a scaffold to organize genomic information in both the living cell and molecular biological research. Thus, ever-increasing amounts of data about genomic features are stored in public databases and can be readily visualized by genome browsers. To perform in silico experimentation conveniently with this genomics data, biologists need tools to process and compare datasets routinely and explore the obtained results interactively. The complexity of such experimentation requires these tools to be based on an e-Science approach, hence generic, modular, and reusable. A virtual laboratory environment with workflows, workflow management systems, and Grid computation are therefore essential. Here we apply an e-Science approach to develop SigWin-detector, a workflow-based tool that can detect significantly enriched windows of (genomic) features in a (DNA) sequence in a fast and reproducible way. For proof-of-principle, we utilize a biological use case to detect regions of increased and decreased gene expression (RIDGEs and anti-RIDGEs) in human transcriptome maps. We improved the original method for RIDGE detection by replacing the costly step of estimation by random sampling with a faster analytical formula for computing the distribution of the null hypothesis being tested and by developing a new algorithm for computing moving medians. SigWin-detector was developed using the WS-VLAM workflow management system and consists of several reusable modules that are linked together in a basic workflow. The configuration of this basic workflow can be adapted to satisfy the requirements of the specific in silico experiment. As we show with the results from analyses in the biological use case on RIDGEs, SigWin-detector is an efficient and reusable Grid-based tool for discovering windows enriched for features of a particular type in any sequence of values. Thus, SigWin-detector provides the proof-of-principle for the modular e-Science based concept of integrative bioinformatics experimentation.
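
    The core idea, flagging windows whose summary statistic is unusually high, can be sketched as below. Note that the published method replaces the costly permutation step with an analytical formula for the null distribution and a dedicated moving-median algorithm; this simplified version keeps the permutation approach purely for illustration.

```python
# Simplified sketch of the core idea (not the SigWin-detector code): slide a
# window along a sequence of values and flag windows whose median exceeds a
# null distribution of window medians obtained here by permutation.
import numpy as np

def enriched_windows(values, win=25, n_perm=200, alpha=0.01, seed=0):
    rng = np.random.default_rng(seed)
    medians = np.array([np.median(values[i:i + win])
                        for i in range(len(values) - win + 1)])
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(values)
        null.extend(np.median(perm[i:i + win])
                    for i in range(len(perm) - win + 1))
    threshold = np.quantile(null, 1 - alpha)
    return np.where(medians > threshold)[0]        # start positions of enriched windows

expr = np.random.default_rng(1).normal(size=400)
expr[150:200] += 2.0                               # artificially enriched region
print(enriched_windows(expr)[:5])
```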

  3. CONNJUR Workflow Builder: A software integration environment for spectral reconstruction

    PubMed Central

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O.; Ellis, Heidi J.C.; Gryk, Michael R.

    2015-01-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses. PMID:26066803

  4. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    PubMed

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

    Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied to the analysis of actual expression datasets related to lung, breast, and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Theoretical predictor for candidate structure assignment from IMS data of biomolecule-related conformational space.

    PubMed

    Schenk, Emily R; Nau, Frederic; Fernandez-Lima, Francisco

    2015-06-01

    The ability to correlate experimental ion mobility data with candidate structures from theoretical modeling provides a powerful analytical and structural tool for the characterization of biomolecules. In the present paper, a theoretical workflow is described to generate and assign candidate structures for experimental trapped ion mobility and H/D exchange (HDX-TIMS-MS) data following molecular dynamics simulations and statistical filtering. The applicability of the theoretical predictor is illustrated for a peptide and protein example with multiple conformations and kinetic intermediates. The described methodology yields a low computational cost and a simple workflow by incorporating statistical filtering and molecular dynamics simulations. The workflow can be adapted to different IMS scenarios and CCS calculators for a more accurate description of the IMS experimental conditions. For the case of the HDX-TIMS-MS experiments, molecular dynamics in the "TIMS box" accounts for a better sampling of the molecular intermediates and local energy minima.
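
    The statistical filtering step amounts to keeping candidate structures whose theoretical collision cross sections fall within the uncertainty of the measured value. The sketch below illustrates that filter with invented CCS values and a hypothetical 2% tolerance; it is not the authors' workflow code.

```python
# Illustration of the candidate-filtering idea with invented CCS values and a
# hypothetical 2% tolerance; this is not the authors' workflow code.
def filter_candidates(candidates, ccs_measured, tolerance_pct=2.0):
    """candidates: dict of name -> theoretical CCS (A^2); returns accepted names."""
    window = ccs_measured * tolerance_pct / 100.0
    return [name for name, ccs in candidates.items()
            if abs(ccs - ccs_measured) <= window]

candidates = {"helical": 301.0, "partially_unfolded": 318.5, "compact": 296.2}
print(filter_candidates(candidates, ccs_measured=300.0))  # -> ['helical', 'compact']
```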

  6. Genetic analysis of circulating tumor cells in pancreatic cancer patients: A pilot study.

    PubMed

    Görner, Karin; Bachmann, Jeannine; Holzhauer, Claudia; Kirchner, Roland; Raba, Katharina; Fischer, Johannes C; Martignoni, Marc E; Schiemann, Matthias; Alunni-Fabbroni, Marianna

    2015-07-01

    Pancreatic cancer is one of the most aggressive malignant tumors, mainly due to aggressive metastatic spread. In recent years, circulating tumor cells (CTCs) have become associated with tumor metastasis. Little is known about their expression profiles. The aim of this study was to develop a complete workflow for isolating circulating tumor cells from patients with pancreatic cancer and characterizing them genetically. We show that the proposed workflow offers a technical sensitivity and specificity high enough to detect and isolate single tumor cells. Moreover, our approach makes it feasible to genetically characterize single CTCs. Our work discloses a complete workflow to detect, count and genetically analyze individual CTCs isolated from blood samples. This method has a central impact on the early detection of metastasis development. The combination of cell quantification and genetic analysis provides clinicians with a powerful tool not available so far. Copyright © 2015. Published by Elsevier Inc.

  7. CONNJUR Workflow Builder: a software integration environment for spectral reconstruction.

    PubMed

    Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert; Martyn, Timothy O; Ellis, Heidi J C; Gryk, Michael R

    2015-07-01

    CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB provide it with novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features which promote collaboration, education, parameterization, and non-uniform data sets along with processing integrated with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.

  8. Experience with the use of the Codonics Safe Label System(™) to improve labelling compliance of anaesthesia drugs.

    PubMed

    Ang, S B L; Hing, W C; Tung, S Y; Park, T

    2014-07-01

    The Codonics Safe Labeling System(™) (http://www.codonics.com/Products/SLS/flash/) is a device that scans medication barcodes, reads the medication name and concentration aloud, and prints a label of the appropriate concentration in the appropriate colour code. We decided to test this system in our facility to identify risks, benefits and usability. Our project comprised a baseline survey (25 anaesthesia cases during which 212 syringes were prepared from 223 drugs), an observational study (47 cases with 330 syringes prepared) and a user acceptability survey. The baseline compliance with all labelling requirements was 58%. In the observational study, compliance using the Codonics system was 98.6% versus 63.8% with conventional labelling. In the user acceptability survey, the majority agreed that the Codonics machine was easy to use and that its labels were more legible and adhered more securely than conventional preprinted labels. However, most were neutral when asked about its flexibility and customisation, and were dissatisfied with the increased workload. Our findings suggest that the Codonics labelling machine is user-friendly and that it improved syringe labelling compliance in our study. However, staff need to be willing to follow the proper labelling workflow rather than batch-label during preparation. Developers of future syringe labelling equipment need to concentrate on user interface issues to reduce human-factors and workflow problems. Support logistics are also an important consideration prior to implementation of any new labelling system.

  9. A new carbon-based magnetic material for the dispersive solid-phase extraction of UV filters from water samples before liquid chromatography-tandem mass spectrometry analysis.

    PubMed

    Piovesana, Susy; Capriotti, Anna Laura; Cavaliere, Chiara; La Barbera, Giorgia; Samperi, Roberto; Zenezini Chiozzi, Riccardo; Laganà, Aldo

    2017-07-01

    Magnetic solid-phase extraction is one of the most promising new extraction methods for liquid samples before ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis. Several types of materials, including carbonaceous ones, have been prepared for this purpose. In this paper, for the first time, the preparation, characterization, and sorption capability of a Fe₃O₄-graphitized carbon black (mGCB) composite toward some compounds of environmental interest were investigated. The synthesized mGCB consisted of micrometric GCB particles with a surface area of 55 m² g⁻¹, bearing some carbonyl and hydroxyl functionalities, with the surface partially decorated by Fe₃O₄ microparticles. The prepared mGCB was first tested as an adsorbent for the extraction from surface water of 50 pollutants, including estrogens, perfluoroalkyl compounds, UV filters, and quinolones. The material showed good affinity for many of the tested compounds, except carboxylates and glucuronates; however, some compounds were difficult to desorb. Ten UV filters belonging to the chemical classes of benzophenones and p-aminobenzoates were selected, and parameters were optimized for the extraction of these compounds from surface water before UHPLC-MS/MS determination. The method was then validated in terms of linearity, trueness, intra-laboratory precision, and detection and quantification limits. In summary, the method performance (trueness, expressed as analytical recovery, 85-114%; RSD 5-15%) appears suitable for the determination of the selected compounds at the level of 10-100 ng L⁻¹, with detection limits in the range of 1-5 ng L⁻¹. Finally, the new method was compared with a published one, based on conventional solid-phase extraction with GCB, showing similar performance in real sample analysis. Graphical Abstract: Workflow of the analytical method based on magnetic solid-phase extraction followed by LC-MS/MS determination.

  10. Comparison of peak-picking workflows for untargeted liquid chromatography/high-resolution mass spectrometry metabolomics data analysis.

    PubMed

    Rafiei, Atefeh; Sleno, Lekha

    2015-01-15

    Data analysis is a key step in mass spectrometry based untargeted metabolomics, starting with the generation of generic peak lists from raw liquid chromatography/mass spectrometry (LC/MS) data. Because different workflows use different algorithms, the results of different peak-picking strategies often differ widely. Raw LC/HRMS data from two types of biological samples (bile and urine), as well as a standard mixture of 84 metabolites, were processed with four peak-picking software tools: Peakview®, Markerview™, MetabolitePilot™ and XCMS Online. The overlaps between the results of each peak-generating method were then investigated. To gauge the relevance of the peak lists, a database search using the METLIN online database was performed to determine which features had accurate masses matching known metabolites, followed by a secondary filtering based on MS/MS spectral matching. In this study, only a small proportion of all peaks (less than 10%) were common to all four software tools. Comparison of database searching results showed that peaks found uniquely by one workflow have less chance of being found in the METLIN metabolomics database and are even less likely to be confirmed by MS/MS. It was shown that the performance of peak-generating workflows has a direct impact on untargeted metabolomics results. Since peaks found by more than one peak detection workflow have a higher potential to be identified by accurate mass as well as MS/MS spectrum matching, it is suggested to use the overlap of different peak-picking workflows as preliminary peak lists for more robust statistical analysis in global metabolomics investigations. Copyright © 2014 John Wiley & Sons, Ltd.
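
    The overlap analysis described above can be illustrated with a minimal sketch. The Python below (tolerances and example values are invented, not taken from the study) matches features between two peak lists by accurate mass and retention time, the basic operation behind comparing peak-picking workflows.

    ```python
    # Illustrative sketch (assumed tolerances, not from the paper): count how many
    # features from one peak-picking tool can be matched in another tool's list,
    # using an m/z tolerance in ppm and a retention-time tolerance in minutes.

    def match_count(list_a, list_b, ppm_tol=10.0, rt_tol=0.2):
        """Each list holds (mz, rt) tuples; returns the number of features in
        list_a with at least one match in list_b."""
        matched = 0
        for mz_a, rt_a in list_a:
            for mz_b, rt_b in list_b:
                if abs(mz_a - mz_b) / mz_a * 1e6 <= ppm_tol and abs(rt_a - rt_b) <= rt_tol:
                    matched += 1
                    break
        return matched


    if __name__ == "__main__":
        tool1 = [(180.0634, 5.21), (256.2402, 12.80), (304.1911, 8.03)]
        tool2 = [(180.0637, 5.25), (304.1920, 8.10), (512.3355, 15.40)]
        n = match_count(tool1, tool2)
        print(f"{n} of {len(tool1)} features overlap between the two peak lists")
    ```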

  11. Evaluation of Standardization of Transfer of Accountability between Inpatient Pharmacists.

    PubMed

    Tsoi, Vivian; Dewhurst, Norman; Tom, Elaine

    2018-01-01

    A compelling body of evidence supports the notion that transfer of accountability (TOA) improves communication, continuity of care, and patient safety. TOA involves the transmission and receipt of information between clinicians at each transition of care. Without a notification system alerting pharmacists to patient transfers, pharmacists' ability to seek out and complete TOA may be hindered. A standardized policy and process for TOA, with automated workflow, was implemented at the study hospital in 2015, to ensure consistency and timeliness of documentation by pharmacists. To evaluate pharmacists' adherence to and satisfaction with the TOA policy and process. A retrospective audit was conducted, using a random sample of individuals who were inpatients between June 2014 and February 2016. Transition points for TOA were identified, and the computerized pharmacy system was reviewed to determine whether TOA had been documented at each transition point. After the audit, an online survey was distributed to assess pharmacists' response to and satisfaction with the TOA policy and workflow. Before the TOA workflow was implemented, TOA documentation by pharmacists ranged from 11% (10/93) to 43% (48/111) of transitions. Eight months after implementation of the workflow, the rate of TOA documentation was 87% (68/78), exceeding the institution's target of 70%. Of the 32 pharmacists surveyed, most were satisfied with the TOA policy and agreed that the standardized workflow was simple to use, increased the number of TOAs provided and received, and improved the quality of completed TOAs. Respondents also indicated that the TOA workflow had improved patient care (mean score 4.09/5, standard deviation 0.64). The standardized TOA policy and process were well received by pharmacists, and resulted in consistent TOA documentation and a TOA documentation rate that exceeded the institutional target.

  12. REPRODUCIBLE RESEARCH WORKFLOW IN R FOR THE ANALYSIS OF PERSONALIZED HUMAN MICROBIOME DATA.

    PubMed

    Callahan, Benjamin; Proctor, Diana; Relman, David; Fukuyama, Julia; Holmes, Susan

    2016-01-01

    This article presents a reproducible research workflow for amplicon-based microbiome studies in personalized medicine created using Bioconductor packages and the knitr markdown interface. We show that a multiplicity of choices and a lack of consistent documentation at each stage of the sequential processing pipeline used for the analysis of microbiome data can sometimes lead to spurious results. We propose replacing such pipelines with reproducible and documented analyses using the R packages dada2, knitr, and phyloseq. This workflow implements both key stages of amplicon analysis: the initial filtering and denoising steps needed to construct taxonomic feature tables from error-containing sequencing reads (dada2), and the exploratory and inferential analysis of those feature tables and associated sample metadata (phyloseq). This workflow facilitates reproducible interrogation of the full set of choices required in microbiome studies. We present several examples in which we leverage existing packages for analysis in a way that allows easy sharing and modification by others, and give pointers to articles that depend on this reproducible workflow for the study of longitudinal and spatial series analyses of the vaginal microbiome in pregnancy and the oral microbiome in humans with healthy dentition and intra-oral tissues.

  13. Agile parallel bioinformatics workflow management using Pwrake.

    PubMed

    Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro

    2011-09-08

    In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain-specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.

  14. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows. PMID:21899774

  15. Workflows and performances in the ranking prediction of 2016 D3R Grand Challenge 2: lessons learned from a collaborative effort.

    PubMed

    Gao, Ying-Duo; Hu, Yuan; Crespo, Alejandro; Wang, Deping; Armacost, Kira A; Fells, James I; Fradera, Xavier; Wang, Hongwu; Wang, Huijun; Sherborne, Brad; Verras, Andreas; Peng, Zhengwei

    2018-01-01

    The 2016 D3R Grand Challenge 2 includes both pose and affinity or ranking predictions. This article is focused exclusively on affinity predictions submitted to the D3R challenge from a collaborative effort of the modeling and informatics group. Our submissions include ranking of 102 ligands covering 4 different chemotypes against the FXR ligand binding domain structure, and the relative binding affinity predictions of the two designated free energy subsets of 15 and 18 compounds. Using all the complex structures prepared in the same way allowed us to cover many types of workflows and compare their performances effectively. We evaluated typical workflows used in our daily structure-based design modeling support, which include docking scores, force field-based scores, QM/MM, MMGBSA, MD-MMGBSA, and MacroModel interaction energy estimations. The best performing methods for the two free energy subsets are discussed. Our results suggest that affinity ranking still remains very challenging; that the knowledge of more structural information does not necessarily yield more accurate predictions; and that visual inspection and human intervention are considerably important for ranking. Knowledge of the mode of action and protein flexibility along with visualization tools that depict polar and hydrophobic maps are very useful for visual inspection. QM/MM-based workflows were found to be powerful in affinity ranking and are encouraged to be applied more often. The standardized input and output enable systematic analysis and support methodology development and improvement for high level blinded predictions.

  16. Workflows and performances in the ranking prediction of 2016 D3R Grand Challenge 2: lessons learned from a collaborative effort

    NASA Astrophysics Data System (ADS)

    Gao, Ying-Duo; Hu, Yuan; Crespo, Alejandro; Wang, Deping; Armacost, Kira A.; Fells, James I.; Fradera, Xavier; Wang, Hongwu; Wang, Huijun; Sherborne, Brad; Verras, Andreas; Peng, Zhengwei

    2018-01-01

    The 2016 D3R Grand Challenge 2 includes both pose and affinity or ranking predictions. This article is focused exclusively on affinity predictions submitted to the D3R challenge from a collaborative effort of the modeling and informatics group. Our submissions include ranking of 102 ligands covering 4 different chemotypes against the FXR ligand binding domain structure, and the relative binding affinity predictions of the two designated free energy subsets of 15 and 18 compounds. Using all the complex structures prepared in the same way allowed us to cover many types of workflows and compare their performances effectively. We evaluated typical workflows used in our daily structure-based design modeling support, which include docking scores, force field-based scores, QM/MM, MMGBSA, MD-MMGBSA, and MacroModel interaction energy estimations. The best performing methods for the two free energy subsets are discussed. Our results suggest that affinity ranking still remains very challenging; that the knowledge of more structural information does not necessarily yield more accurate predictions; and that visual inspection and human intervention are considerably important for ranking. Knowledge of the mode of action and protein flexibility along with visualization tools that depict polar and hydrophobic maps are very useful for visual inspection. QM/MM-based workflows were found to be powerful in affinity ranking and are encouraged to be applied more often. The standardized input and output enable systematic analysis and support methodology development and improvement for high level blinded predictions.
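
    For readers wanting a concrete picture of how such ranking submissions are typically scored, the following minimal Python sketch (the affinity values are invented; the metrics are standard choices, not necessarily those used by the D3R evaluators) computes rank correlations between predicted and experimental affinities.

    ```python
    # Illustrative sketch (example numbers are invented): compare a predicted
    # affinity ranking against experimental values with rank-correlation metrics,
    # as is commonly done when scoring D3R-style blinded ranking submissions.
    from scipy.stats import kendalltau, spearmanr

    # Hypothetical experimental pIC50 values and predicted scores for 8 ligands.
    experimental = [7.9, 6.2, 8.4, 5.1, 6.8, 7.0, 5.9, 8.0]
    predicted    = [7.5, 6.0, 8.9, 5.5, 6.1, 7.2, 6.4, 7.8]

    tau, tau_p = kendalltau(experimental, predicted)
    rho, rho_p = spearmanr(experimental, predicted)
    print(f"Kendall tau = {tau:.2f} (p = {tau_p:.3f})")
    print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
    ```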

  17. Developing a Collection of Composable Data Translation Software Units to Improve Efficiency and Reproducibility in Ecohydrologic Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.

    2017-12-01

    Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation steps, which often involve subsetting in space and/or time and transforming or converting variable units, represent a seemingly mundane but critical step in application workflows. Translation steps can introduce errors, misrepresent data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact of translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community approved collection of data transformation components. The components should be self-contained composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of bytes to be read as the number of computations it performs. When several components' executions are coordinated, the overall arithmetic intensity increases, leading to increased efficiency.
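
    As a toy illustration of the composable, streaming translation units proposed above, the Python sketch below (variable names, units, and the pipeline itself are assumptions, not the authors' software) chains a temporal subset and a unit conversion into a single pass over a stream of forcing records.

    ```python
    # Illustrative sketch (variable names and units are assumptions): small,
    # self-contained translation components composed into a single streaming
    # pass, so records are subset and unit-converted without intermediate files.

    def subset_time(records, start, end):
        """Keep records whose 'hour' index falls in [start, end)."""
        for rec in records:
            if start <= rec["hour"] < end:
                yield rec

    def kelvin_to_celsius(records):
        """Convert the temperature field from K to degrees C."""
        for rec in records:
            yield dict(rec, temp_c=rec["temp_k"] - 273.15)

    def compose(source, *stages):
        """Chain generator-based components into one streaming pipeline."""
        stream = source
        for stage in stages:
            stream = stage(stream)
        return stream


    if __name__ == "__main__":
        forcing = [{"hour": h, "temp_k": 270.0 + h} for h in range(48)]
        pipeline = compose(forcing,
                           lambda s: subset_time(s, start=24, end=30),
                           kelvin_to_celsius)
        for rec in pipeline:
            print(rec["hour"], round(rec["temp_c"], 2))
    ```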

  18. Impact of the digital revolution on the future of pharmaceutical formulation science.

    PubMed

    Leuenberger, Hans; Leuenberger, Michael N

    2016-05-25

    The ongoing digital revolution is no longer limited to smartphone apps for daily needs; it is also starting to affect our professional life in formulation science. The software platform F-CAD (Formulation-Computer Aided Design) of CINCAP can be used to develop and test capsule and tablet formulations in silico. Such an approach allows the pharmaceutical industry to adopt the workflow of the automotive and aircraft industry. Thus, the first prototype of the drug delivery vehicle is prepared virtually by mimicking the composition (particle size distribution of the active drug substance and of the excipients within the tablet) and the process, such as direct compression, to obtain a defined porosity. The software is based on a cellular automaton (CA) process mimicking the dissolution profile of the capsule or tablet formulation. To take into account the type of dissolution equipment and all SOPs (Standard Operating Procedures), such as the use of a single-punch press to manufacture the tablet, a calibration of the F-CAD dissolution profile of the virtual tablet is needed. Thus, the virtual tablet becomes a copy of the real tablet. This statement is valid for all tablets manufactured within the same formulation design space. For this reason, it is important to define the formulation design space, consisting of the composition and the processes, as early as Clinical Phase I and to work only within this design space during all clinical phases. Thus, it is not recommended to start with a simple capsule formulation as a service dosage form and to change later to a market-ready tablet formulation. The availability of F-CAD is a necessary, but not sufficient, condition for implementing the workflow of the automotive and aircraft industry for developing and testing drug delivery vehicles. For a successful implementation of the new workflow, a harmonization of the equipment and the processes between the development and manufacturing departments is a must. In this context, the clinical samples for Clinical Phases I and II should be prepared with a mechanical simulator of the high-speed rotary press used for large batches for Clinical Phases III & IV. If not, the problem of working practically and virtually in different formulation design spaces will remain, causing billions of dollars in losses worldwide each year according to the study of Benson and MacCabe. The harmonization of equipment and processes needs a close cooperation between the industrial pharmacist and the pharmaceutical engineer. In addition, Virtual Equipment Simulators (VESs) of small- and large-scale equipment for training and computer-assisted scale-up would be desirable. A lean and intelligent management information and documentation system will improve the connectivity between the different work stations. Thus, in the future, it may be possible to rent F-CAD at low cost as an IT (Information Technology) platform based on a cloud computing solution. By adopting the workflow of the automotive and aircraft industry, significant savings, a reduced time to market, a lower attrition rate, and a much higher quality of the final marketed dosage form can be achieved. Copyright © 2016 Elsevier B.V. All rights reserved.
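
    To make the cellular automaton idea concrete, here is a deliberately simple Python sketch (a toy model with invented parameters, not the F-CAD algorithm) in which surface-exposed solid cells dissolve with a fixed probability per step and the cumulative fraction released is tracked as a crude dissolution profile.

    ```python
    # Minimal illustrative cellular-automaton sketch of tablet dissolution
    # (a toy model, not the F-CAD algorithm): solid cells exposed to the
    # dissolution medium dissolve with a fixed probability per time step,
    # and the cumulative fraction released is recorded.
    import random

    def step(grid, p_dissolve):
        n = len(grid)
        def exposed(i, j):
            # A cell is exposed if any 4-neighbour is medium (or lies outside the tablet).
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if ni < 0 or nj < 0 or ni >= n or nj >= n or grid[ni][nj] == 0:
                    return True
            return False
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 1 and exposed(i, j) and random.random() < p_dissolve:
                    new[i][j] = 0
        return new

    def dissolution_profile(size=40, p_dissolve=0.3, steps=60):
        random.seed(1)
        grid = [[1] * size for _ in range(size)]
        total = size * size
        profile = []
        for _ in range(steps):
            grid = step(grid, p_dissolve)
            dissolved = total - sum(sum(row) for row in grid)
            profile.append(dissolved / total)
        return profile

    if __name__ == "__main__":
        for t, frac in enumerate(dissolution_profile(), start=1):
            if t % 10 == 0:
                print(f"step {t:3d}: {frac:.1%} dissolved")
    ```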

  19. Multidimensional electrostatic repulsion-hydrophilic interaction chromatography (ERLIC) for quantitative analysis of the proteome and phosphoproteome in clinical and biomedical research.

    PubMed

    Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert

    2015-05-01

    Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually infinite sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires an efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow that also covers the global proteome. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of a tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and even ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by non-phosphorylated peptides, concurrently yielding insights into changes at the levels of protein expression and phosphorylation. Besides its sensitivity, our novel three-dimensional ERLIC strategy has the potential for semi-automated sample processing, making it a promising option for clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.

  20. A robust ambient temperature collection and stabilization strategy: Enabling worldwide functional studies of the human microbiome

    PubMed Central

    Anderson, Ericka L.; Li, Weizhong; Klitgord, Niels; Highlander, Sarah K.; Dayrit, Mark; Seguritan, Victor; Yooseph, Shibu; Biggs, William; Venter, J. Craig; Nelson, Karen E.; Jones, Marcus B.

    2016-01-01

    As reports on possible associations between microbes and the host increase in number, more meaningful interpretations of this information require an ability to compare data sets across studies. This is dependent upon standardization of workflows to ensure comparability both within and between studies. Here we propose the standard use of an alternate collection and stabilization method that would facilitate such comparisons. The DNA Genotek OMNIgene∙Gut Stool Microbiome Kit was compared to the currently accepted community standard of freezing to store human stool samples prior to whole genome sequencing (WGS) for microbiome studies. This stabilization and collection device allows for ambient temperature storage, automation, and ease of shipping/transfer of samples. The device permitted the same data reproducibility as with frozen samples, and yielded higher recovery of nucleic acids. Collection and stabilization of stool microbiome samples with the DNA Genotek collection device, combined with our extraction and WGS, provides a robust, reproducible workflow that enables standardized global collection, storage, and analysis of stool for microbiome studies. PMID:27558918

  1. Digital Assays Part I: Partitioning Statistics and Digital PCR.

    PubMed

    Basu, Amar S

    2017-08-01

    A digital assay is one in which the sample is partitioned into many small containers such that each partition contains a discrete number of biological entities (0, 1, 2, 3, …). A powerful technique in the biologist's toolkit, digital assays bring a new level of precision in quantifying nucleic acids, measuring proteins and their enzymatic activity, and probing single-cell genotypes and phenotypes. Part I of this review begins with the benefits and Poisson statistics of partitioning, including sources of error. The remainder focuses on digital PCR (dPCR) for quantification of nucleic acids. We discuss five commercial instruments that partition samples into physically isolated chambers (cdPCR) or droplet emulsions (ddPCR). We compare the strengths of dPCR (absolute quantitation, precision, and ability to detect rare or mutant targets) with those of its predecessor, quantitative real-time PCR (dynamic range, larger sample volumes, and throughput). Lastly, we describe several promising applications of dPCR, including copy number variation, quantitation of circulating tumor DNA and viral load, RNA/miRNA quantitation with reverse transcription dPCR, and library preparation for next-generation sequencing. This review is intended to give a broad perspective to scientists interested in adopting digital assays into their workflows. Part II focuses on digital protein and cell assays.
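
    The Poisson arithmetic behind digital PCR quantitation can be written in a few lines. The Python sketch below (partition counts and volume are invented example numbers) estimates the mean copies per partition from the fraction of negative partitions and converts it to a concentration.

    ```python
    # Illustrative sketch of the Poisson arithmetic behind digital PCR
    # (partition volume and counts below are invented example numbers):
    # the mean copies per partition is estimated from the fraction of
    # negative partitions, then converted to a concentration.
    import math

    def dpcr_concentration(n_total, n_negative, partition_volume_nl):
        """Return (copies per partition, copies per microlitre)."""
        if n_negative == 0 or n_negative == n_total:
            raise ValueError("All-positive or all-negative runs cannot be quantified")
        lam = -math.log(n_negative / n_total)               # mean copies per partition
        copies_per_ul = lam / (partition_volume_nl * 1e-3)  # nl -> ul
        return lam, copies_per_ul

    if __name__ == "__main__":
        lam, conc = dpcr_concentration(n_total=20000, n_negative=14000,
                                       partition_volume_nl=0.85)
        print(f"lambda = {lam:.3f} copies/partition, {conc:.0f} copies/uL")
    ```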

  2. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated an RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  3. Optimization of multiplexed PCR on an integrated microfluidic forensic platform for rapid DNA analysis.

    PubMed

    Estes, Matthew D; Yang, Jianing; Duane, Brett; Smith, Stan; Brooks, Carla; Nordquist, Alan; Zenhausern, Frederic

    2012-12-07

    This study reports the design, prototyping, and assay development of multiplexed polymerase chain reaction (PCR) on a plastic microfluidic device. Amplification of 17 DNA loci is carried out directly on-chip as part of a system for continuous workflow processing from sample preparation (SP) to capillary electrophoresis (CE). For enhanced performance of on-chip PCR amplification, improved control systems have been developed making use of customized Peltier assemblies, valve actuators, software, and amplification chemistry protocols. Multiple enhancements to the microfluidic chip design have been enacted to improve the reliability of sample delivery through the various on-chip modules. This work has been enabled by the encapsulation of PCR reagents into a solid phase material through an optimized Solid Phase Encapsulating Assay Mix (SPEAM) bead-based hydrogel fabrication process. SPEAM bead technology is reliably coupled with precise microfluidic metering and dispensing for efficient amplification and subsequent DNA short tandem repeat (STR) fragment analysis. This provides a means of on-chip reagent storage suitable for microfluidic automation, with the long shelf-life necessary for point-of-care (POC) or field deployable applications. This paper reports the first high quality 17-plex forensic STR amplification from a reference sample in a microfluidic chip with preloaded solid phase reagents, that is designed for integration with up and downstream processing.

  4. Metabolomics by Gas Chromatography-Mass Spectrometry: the combination of targeted and untargeted profiling

    PubMed Central

    Fiehn, Oliver

    2016-01-01

    Gas chromatography-mass spectrometry (GC-MS)-based metabolomics is ideal for identifying and quantitating small molecular metabolites (<650 daltons), including small acids, alcohols, hydroxyl acids, amino acids, sugars, fatty acids, sterols, catecholamines, drugs, and toxins, often using chemical derivatization to make these compounds volatile enough for gas chromatography. This unit shows how GC-MS-based metabolomics easily allows targeted assays for absolute quantification of specific metabolites to be integrated with untargeted metabolomics to discover novel compounds. Complemented by database annotations using large spectral libraries and validated, standardized operating procedures, GC-MS can identify and semi-quantify over 200 compounds per study in human body fluid samples (e.g., plasma, urine or stool). Deconvolution software enables detection of more than 300 additional unidentified signals that can be annotated through accurate mass instruments with appropriate data processing workflows, similar to untargeted profiling by liquid chromatography-MS (LC-MS). Hence, GC-MS is a mature technology that uses not only classic detectors (‘quadrupole’) but also target mass spectrometers (‘triple quadrupole’) and accurate mass instruments (‘quadrupole-time of flight’). This unit covers the following aspects of GC-MS-based metabolomics: (i) sample preparation from mammalian samples, (ii) acquisition of data, (iii) quality control, and (iv) data processing. PMID:27038389

  5. Sperm Hy-Liter™: an effective tool for the detection of spermatozoa in sexual assault exhibits.

    PubMed

    De Moors, Anick; Georgalis, Tina; Armstrong, Gail; Modler, Jeff; Frégeau, Chantal J

    2013-05-01

    A fluorescence-based assay specifically targeting human spermatozoa was tested and optimized for best staining results using a variety of mock sexual assault samples. Swab clippings versus whole swabs were evaluated for best sample preparation and to simplify workflow (direct application versus swab extraction). The practicality and sensitivity of Sperm Hy-Liter™ were compared to our current phase contrast microscopy protocol for searching for the presence of spermatozoa. Sperm Hy-Liter™ was more sensitive than phase contrast microscopy and was able to detect spermatozoa more effectively in actual sexual assault samples (recent [N=240] or 24 years old [N=4]) containing few spermatozoa. Correlations were drawn between the Sperm Hy-Liter™ spermatozoa counts and the AmpFlSTR(®) Profiler(®) Plus male profiles generated from the sperm cell DNA fractions of semen-containing swabs and swab clippings. In addition, recovered spermatozoa from Sperm Hy-Liter™-stained slides with greater than 40 spermatozoa produced full STR male profiles in 20.3% of slides tested and partial STR male profiles in 52.8% of slides tested. The adoption of Sperm Hy-Liter™ offers a means to standardize and improve the efficiency of the microscopic screening of sexual assault evidence. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Global combined precursor isotopic labeling and isobaric tagging (cPILOT) approach with selective MS(3) acquisition.

    PubMed

    Evans, Adam R; Robinson, Renã A S

    2013-11-01

    Recently, we reported a novel proteomics quantitation scheme termed "combined precursor isotopic labeling and isobaric tagging (cPILOT)" that allows for the identification and quantitation of nitrated peptides in as many as 12-16 samples in a single experiment. cPILOT offers enhanced multiplexing and posttranslational modification specificity; however, it excludes global quantitation of all peptides present in a mixture and, like other isobaric tagging methods, underestimates reporter ion ratios due to precursor co-isolation. Here, we present a novel chemical workflow for cPILOT that can be used for global tagging of all peptides in a mixture. Specifically, through low-pH precursor dimethylation of tryptic or LysC peptides followed by high-pH tandem mass tags, the same reporter ion can be used twice in a single experiment. Also, to improve triple-stage mass spectrometry (MS(3)) data acquisition, a selective MS(3) method that focuses on product selection of the y1 fragment of lysine-terminated peptides is incorporated into the workflow. This novel cPILOT workflow has potential for global peptide quantitation that could lead to enhanced sample multiplexing and increase the number of quantifiable spectra obtained from MS(3) acquisition methods. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
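
    As a small illustration of why the y1 fragment of lysine-terminated peptides is a convenient MS(3) target, the Python sketch below (not the cPILOT software; masses are standard monoisotopic values, and any label on the lysine side chain would add its own mass shift) computes the singly charged y1 m/z for peptides ending in lysine.

    ```python
    # Illustrative sketch (not the cPILOT software): compute the singly charged
    # y1 fragment m/z for lysine-terminated tryptic peptides, the product ion
    # that a selective MS3 method would target. Monoisotopic masses are standard
    # values; any label on the lysine side chain would add its own mass shift.
    LYS_RESIDUE = 128.094963   # monoisotopic residue mass of lysine
    H2O = 18.010565
    PROTON = 1.007276

    def y1_mz(peptide, tag_mass=0.0):
        """Return the y1 m/z (1+) if the peptide ends in K, else None."""
        if not peptide.endswith("K"):
            return None
        return LYS_RESIDUE + H2O + PROTON + tag_mass

    if __name__ == "__main__":
        for pep in ["SAMPLEK", "WORKFLOWR", "PEPTIDEK"]:
            mz = y1_mz(pep)
            status = f"y1 m/z = {mz:.4f}" if mz else "not lysine-terminated"
            print(f"{pep}: {status}")
    ```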

  7. Proteomic analysis of Rhodotorula mucilaginosa: dealing with the issues of a non-conventional yeast.

    PubMed

    Addis, Maria Filippa; Tanca, Alessandro; Landolfo, Sara; Abbondio, Marcello; Cutzu, Raffaela; Biosa, Grazia; Pagnozzi, Daniela; Uzzau, Sergio; Mannazzu, Ilaria

    2016-08-01

    Red yeasts ascribed to the species Rhodotorula mucilaginosa are gaining increasing attention, due to their numerous biotechnological applications, spanning carotenoid production, liquid bioremediation, heavy metal biotransformation and antifungal and plant growth-promoting actions, but also for their role as opportunistic pathogens. Nevertheless, their characterization at the 'omic' level is still scarce. Here, we applied different proteomic workflows to R. mucilaginosa with the aim of assessing their potential in generating information on proteins and functions of biotechnological interest, with a particular focus on the carotenogenic pathway. After optimization of protein extraction, we tested several gel-based (including 2D-DIGE) and gel-free sample preparation techniques, followed by tandem mass spectrometry analysis. Contextually, we evaluated different bioinformatic strategies for protein identification and interpretation of the biological significance of the dataset. When 2D-DIGE analysis was applied, not all spots returned an unambiguous identification and no carotenogenic enzymes were identified, even upon the application of different database search strategies. Then, the application of shotgun proteomic workflows with varying levels of sensitivity provided a picture of the information depth that can be reached with different analytical resources, and resulted in a plethora of information on R. mucilaginosa metabolism. However, also in these cases no proteins related to the carotenogenic pathway were identified, indicating that further improvements in sequence databases and functional annotations are clearly needed to increase the yield of proteomic analysis of this and other non-conventional yeasts. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Bioprofiling of Salvia miltiorrhiza via planar chromatography linked to (bio)assays, high resolution mass spectrometry and nuclear magnetic resonance spectroscopy.

    PubMed

    Azadniya, Ebrahim; Morlock, Gertrud E

    2018-01-19

    An affordable bioanalytical workflow supports the collection of data on active ingredients, required for the understanding of health-related foods, superfoods and traditional medicines. Targeted, effect-directed responses of single compounds in a complex sample highlight this powerful bioanalytical hyphenation of planar chromatography with (bio)assays. Among the many reports on the biological properties of Salvia miltiorrhiza Bunge root (Danshen) and their analytical methods, the highly efficient direct bioautography (DB) workflow has not been considered so far. Apart from our two HPTLC-DB studies, there was just one TLC-acetylcholinesterase (AChE) method, with poor zone resolution; moreover, all methods focused on the nonpolar extracts of Danshen (tanshinones) only. The current study on HPTLC-UV/Vis/FLD-(bio)assay-HRMS, followed by a streamlined scale-up to preparative layer chromatography (PLC)-¹H-NMR, aimed at an even more streamlined, yet comprehensive bioanalytical workflow. It comprised effect-directed screening of both the polar (containing phenolics) and nonpolar extracts (containing tanshinones) on the same HPTLC plate, biochemical and biological profiling with four different (bio)assays, and elucidation of the structures of known and unidentified active compounds. The five AChE inhibitors salvianolic acid B (SAB), lithospermic acid (LSA) and rosmarinic acid (RA) as well as cryptotanshinone (CT) and 15,16-dihydrotanshinone I (DHTI) were confirmed, but unidentified inhibitors were also observed. In the polar extracts, SAB, LSA and RA exhibited free radical scavenging properties in the 2,2-diphenyl-1-picrylhydrazyl assay. CT, DHTI and some unidentified nonpolar compounds were found active against Gram-positive Bacillus subtilis and Gram-negative Aliivibrio fischeri (LOD 12 ng/band for CT, and 5 ng/band for DHTI). For the first time, the most multipotent unidentified active compound zone in the B. subtilis, A. fischeri and AChE fingerprints of the nonpolar Danshen extract was identified as a co-eluting band of 1,2-dihydrotanshinone and methylenetanshinquinone in a 2:1 ratio. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Validation of 31 of the most commonly used immunohistochemical antibodies in cytology prepared using the Cellient(®) automated cell block system.

    PubMed

    Montgomery, Eric; Gao, Chen; de Luca, Julie; Bower, Jessie; Attwood, Kristropher; Ylagan, Lourdes

    2014-12-01

    The Cellient(®) cell block system has become available as an alternative, partially automated method for creating cell blocks in cytology. We sought to demonstrate a validation method for immunohistochemical (IHC) staining on the Cellient cell block system (CCB) in comparison with the formalin-fixed, paraffin-embedded traditional cell block (TCB). Immunohistochemical staining was performed using 31 antibodies on 38 patient samples, for a total of 326 slides. Split samples were processed using both methods, following the Cellient(®) manufacturer's recommendations for the Cellient cell block (CCB) and the Histogel method for preparing the traditional cell block (TCB). Interpretation was performed by three pathologists and two cytotechnologists. Immunohistochemical stains were scored as 0/1+ (negative) or 2/3+ (positive). Inter-rater agreement for each antibody was evaluated for CCB and TCB, as was the intra-rater agreement between TCB and CCB for each observer. Interobserver staining concordance for the TCB was obtained with statistical significance (P < 0.05) for 24 of 31 antibodies. Interobserver staining concordance for the CCB was obtained with statistical significance for 27 of 31 antibodies. Intra-observer staining concordance between TCB and CCB was obtained with statistical significance for 24 of 31 antibodies tested. In conclusion, immunohistochemical stains on cytologic specimens processed by the Cellient system are reliable and concordant with stains performed on the same split samples processed via a formalin-fixed, paraffin-embedded (FFPE) block. The Cellient system is a welcome adjunct to the cytology workflow, producing cell block material of sufficient quality to allow the use of routine IHC. © 2014 Wiley Periodicals, Inc.
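
    The abstract does not reproduce the exact agreement statistic used, but a common choice for this kind of two-rater, binarized scoring is Cohen's kappa; the Python sketch below (hypothetical calls, not study data) shows the calculation.

    ```python
    # Illustrative sketch (the study's exact statistic is not specified here):
    # Cohen's kappa for two raters scoring the same IHC slides after the
    # 0/1+ (negative) vs 2/3+ (positive) binarization described in the abstract.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical binarized calls (1 = positive, 0 = negative) for 12 slides.
    rater_1 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1]
    rater_2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1]

    kappa = cohen_kappa_score(rater_1, rater_2)
    print(f"Cohen's kappa = {kappa:.2f}")
    ```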

  10. Focus: a robust workflow for one-dimensional NMR spectral analysis.

    PubMed

    Alonso, Arnald; Rodríguez, Miguel A; Vinaixa, Maria; Tortosa, Raül; Correig, Xavier; Julià, Antonio; Marsal, Sara

    2014-01-21

    One-dimensional ¹H NMR represents one of the most commonly used analytical techniques in metabolomic studies. The increase in the number of samples analyzed, as well as technical improvements in instrumentation and spectral acquisition, demand increasingly accurate and efficient high-throughput data processing workflows. We present FOCUS, an integrated and innovative methodology that provides a complete data analysis workflow for one-dimensional NMR-based metabolomics. This tool allows users to easily obtain an NMR peak feature matrix ready for chemometric analysis, as well as metabolite identification scores for each peak that greatly simplify the biological interpretation of the results. The algorithm development has focused on solving the critical difficulties that appear at each data processing step and that can dramatically affect the quality of the results. As well as method integration, simplicity has been one of the main objectives in FOCUS development, requiring very little user input to perform accurate peak alignment, peak picking, and metabolite identification. The new spectral alignment algorithm, RUNAS, allows peak alignment with no need for a reference spectrum and therefore reduces the bias introduced by other alignment approaches. Spectral alignment has been tested against previous methodologies, obtaining substantial improvements in the case of moderately or highly unaligned spectra. Metabolite identification has also been significantly improved by matching positional and correlation peak patterns against a reference metabolite panel. Furthermore, the complete workflow has been tested using NMR data sets from 60 human urine samples and 120 aqueous liver extracts, reaching a successful identification of 42 metabolites from the two data sets. The open-source software implementation of this methodology is available at http://www.urr.cat/FOCUS.
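
    As a minimal illustration of automated peak picking of the kind such workflows perform, the Python sketch below (a synthetic spectrum and an assumed prominence threshold, not the FOCUS or RUNAS algorithms) detects peaks in a 1D trace as local maxima standing out from the noise.

    ```python
    # Illustrative peak-picking sketch (not the FOCUS/RUNAS algorithms): detect
    # peaks in a synthetic 1D spectrum as local maxima above a noise-based
    # prominence threshold, the kind of step a 1D NMR workflow automates.
    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic spectrum: three Lorentzian-like peaks plus noise.
    ppm = np.linspace(0, 10, 5000)
    spectrum = sum(h / (1 + ((ppm - c) / w) ** 2)
                   for h, c, w in [(1.0, 2.5, 0.02), (0.6, 5.1, 0.03), (0.8, 7.4, 0.02)])
    rng = np.random.default_rng(0)
    spectrum = spectrum + rng.normal(0, 0.01, ppm.size)

    # Peaks must stand out from the noise floor by a minimum prominence.
    idx, props = find_peaks(spectrum, prominence=0.1)
    for i in idx:
        print(f"peak at {ppm[i]:.2f} ppm, intensity {spectrum[i]:.2f}")
    ```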

  11. Data Management and Archiving - a Long Process

    NASA Astrophysics Data System (ADS)

    Gebauer, Petra; Bertelmann, Roland; Hasler, Tim; Kirchner, Ingo; Klump, Jens; Mettig, Nora; Peters-Kottig, Wolfgang; Rusch, Beate; Ulbricht, Damian

    2014-05-01

    Implementing policies for research data management, up to the point of data archiving, at university institutions takes a long time. Even though most scientists, especially in the geosciences, are familiar with analyzing different sorts of data, presenting statistical results and writing publications sometimes based on large data records, only some of them manage their data in a standardized manner. Much more often they have learned how to measure and generate large volumes of data than how to document these measurements and preserve them for the future. Changing staff and limited funding make this work more difficult, but it is essential in a progressively developing digital and networked world. Results from the project EWIG (the German project name translates to "Developing workflow components for long-term archiving of research data in geosciences"), funded by the Deutsche Forschungsgemeinschaft, will help address these issues. Together with the project partners Deutsches GeoForschungsZentrum Potsdam and Konrad-Zuse-Zentrum für Informationstechnik Berlin, a workflow was developed to transfer continuously recorded data from a meteorological city monitoring network into a long-term archive. This workflow includes quality assurance of the data as well as description of metadata and the use of tools to prepare data packages for long-term archiving. It will serve as an exemplary model for other institutions working with similar data. The development of this workflow is closely intertwined with the educational curriculum at the Institut für Meteorologie. Designing modules to run quality checks for meteorological time series measured every minute and preparing metadata are tasks in current bachelor theses. Students will also test the usability of the generated working environment. Based on these experiences, a practical guideline for integrating research data management into curricula will be one of the results of this project, for postgraduates as well as for younger students. Especially at the beginning of a scientific career, it is necessary to become familiar with all issues concerning data management. The outcomes of EWIG are intended to be generic enough to be easily adopted by other institutions. University lectures in meteorology have been started to teach future generations of scientists, right from the start, how to deal with all sorts of different data in a transparent way. The progress of the project EWIG can be followed on the web at ewig.gfz-potsdam.de

  12. Peregrine

    PubMed Central

    Langevin, Stanley A.; Bent, Zachary W.; Solberg, Owen D.; Curtis, Deanna J.; Lane, Pamela D.; Williams, Kelly P.; Schoeniger, Joseph S.; Sinha, Anupama; Lane, Todd W.; Branda, Steven S.

    2013-01-01

    Use of second generation sequencing (SGS) technologies for transcriptional profiling (RNA-Seq) has revolutionized transcriptomics, enabling measurement of RNA abundances with unprecedented specificity and sensitivity and the discovery of novel RNA species. Preparation of RNA-Seq libraries requires conversion of the RNA starting material into cDNA flanked by platform-specific adaptor sequences. Each of the published methods and commercial kits currently available for RNA-Seq library preparation suffers from at least one major drawback, including long processing times, large starting material requirements, uneven coverage, loss of strand information and high cost. We report the development of a new RNA-Seq library preparation technique that produces representative, strand-specific RNA-Seq libraries from small amounts of starting material in a fast, simple and cost-effective manner. Additionally, we have developed a new quantitative PCR-based assay for precisely determining the number of PCR cycles to perform for optimal enrichment of the final library, a key step in all SGS library preparation workflows. PMID:23558773
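
    The abstract's qPCR-based cycle-selection idea can be sketched as follows (Python; the stopping rule and the amplification curve are illustrative assumptions, not the published assay): choose the cycle at which a library aliquot's amplification curve reaches a set fraction of its plateau, so full-scale enrichment stops before over-amplification.

    ```python
    # Illustrative sketch (the assay's exact stopping rule is not reproduced
    # here): pick the PCR cycle at which a qPCR amplification curve on a library
    # aliquot reaches a chosen fraction of its plateau fluorescence, a common
    # way to avoid over-amplification before full-scale enrichment.

    def cycles_for_fraction(fluorescence, fraction=0.25):
        """fluorescence: per-cycle readings (background-subtracted).
        Returns the first cycle (1-based) reaching fraction * plateau."""
        plateau = max(fluorescence)
        target = fraction * plateau
        for cycle, signal in enumerate(fluorescence, start=1):
            if signal >= target:
                return cycle
        return len(fluorescence)

    if __name__ == "__main__":
        # Hypothetical sigmoidal curve over 20 cycles.
        curve = [0.01, 0.01, 0.02, 0.03, 0.05, 0.08, 0.14, 0.25, 0.45, 0.80,
                 1.40, 2.30, 3.40, 4.30, 4.80, 5.00, 5.05, 5.08, 5.09, 5.10]
        print("Amplify the full library for", cycles_for_fraction(curve), "cycles")
    ```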

  13. Clinical Validation and Implementation of a Targeted Next-Generation Sequencing Assay to Detect Somatic Variants in Non-Small Cell Lung, Melanoma, and Gastrointestinal Malignancies

    PubMed Central

    Fisher, Kevin E.; Zhang, Linsheng; Wang, Jason; Smith, Geoffrey H.; Newman, Scott; Schneider, Thomas M.; Pillai, Rathi N.; Kudchadkar, Ragini R.; Owonikoko, Taofeek K.; Ramalingam, Suresh S.; Lawson, David H.; Delman, Keith A.; El-Rayes, Bassel F.; Wilson, Malania M.; Sullivan, H. Clifford; Morrison, Annie S.; Balci, Serdar; Adsay, N. Volkan; Gal, Anthony A.; Sica, Gabriel L.; Saxe, Debra F.; Mann, Karen P.; Hill, Charles E.; Khuri, Fadlo R.; Rossi, Michael R.

    2017-01-01

    We tested and clinically validated a targeted next-generation sequencing (NGS) mutation panel using 80 formalin-fixed, paraffin-embedded (FFPE) tumor samples. Forty non-small cell lung carcinoma (NSCLC), 30 melanoma, and 30 gastrointestinal (12 colonic, 10 gastric, and 8 pancreatic adenocarcinoma) FFPE samples were selected from laboratory archives. After appropriate specimen and nucleic acid quality control, 80 NGS libraries were prepared using the Illumina TruSight tumor (TST) kit and sequenced on the Illumina MiSeq. Sequence alignment, variant calling, and sequencing quality control were performed using vendor software and laboratory-developed analysis workflows. TST generated ≥500× coverage for 98.4% of the 13,952 targeted bases. Reproducible and accurate variant calling was achieved at ≥5% variant allele frequency with 8 to 12 multiplexed samples per MiSeq flow cell. TST detected 112 variants overall, and confirmed all known single-nucleotide variants (n = 27), deletions (n = 5), insertions (n = 3), and multinucleotide variants (n = 3). TST detected at least one variant in 85.0% (68/80), and two or more variants in 36.2% (29/80), of samples. TP53 was the most frequently mutated gene in NSCLC (13 variants; 13/32 samples), gastrointestinal malignancies (15 variants; 13/25 samples), and overall (30 variants; 28/80 samples). BRAF mutations were most common in melanoma (nine variants; 9/23 samples). Clinically relevant NGS data can be obtained from routine clinical FFPE solid tumor specimens using TST, benchtop instruments, and vendor-supplied bioinformatics pipelines. PMID:26801070
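
    A minimal sketch of the reporting thresholds mentioned above (≥5% variant allele frequency, ≥500× coverage) is shown below in Python; the record layout and example calls are assumptions, not the validated pipeline.

    ```python
    # Illustrative sketch (thresholds follow the abstract's reporting criteria,
    # but the field names are assumptions): keep variant calls with at least
    # 5% variant allele frequency and 500x coverage at the position.

    def filter_variants(variants, min_vaf=0.05, min_depth=500):
        return [v for v in variants
                if v["depth"] >= min_depth and v["alt_reads"] / v["depth"] >= min_vaf]

    if __name__ == "__main__":
        calls = [
            {"gene": "TP53", "depth": 1250, "alt_reads": 180},
            {"gene": "BRAF", "depth": 900,  "alt_reads": 30},   # 3.3% VAF -> filtered
            {"gene": "KRAS", "depth": 420,  "alt_reads": 60},   # depth < 500 -> filtered
        ]
        for v in filter_variants(calls):
            vaf = v["alt_reads"] / v["depth"]
            print(f"{v['gene']}: VAF {vaf:.1%} at {v['depth']}x")
    ```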

  14. Flexible workflow sharing and execution services for e-scientists

    NASA Astrophysics Data System (ADS)

    Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely

    2013-04-01

    The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data- and/or compute-intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows are still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides based on the platform to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides similar access capabilities to the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal, one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. Through third-party workflow engines, the Portal provides support for the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, shared workflows can be executed online without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.

  15. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.

  16. Considerations for standardizing predictive molecular pathology for cancer prognosis.

    PubMed

    Fiorentino, Michelangelo; Scarpelli, Marina; Lopez-Beltran, Antonio; Cheng, Liang; Montironi, Rodolfo

    2017-01-01

    Molecular tests that were once ancillary to the core business of cyto-histopathology are becoming the most relevant workload in pathology departments, after histopathology/cytopathology and before autopsies. This has resulted from innovations in molecular biology techniques, which have developed at an incredibly fast pace. Areas covered: Most of the currently widely used techniques in molecular pathology, such as FISH, direct sequencing, pyrosequencing, and allele-specific PCR, will be replaced by massively parallel sequencing, which will no longer be considered next-generation but rather current-generation sequencing. The pre-analytical steps of molecular techniques, such as DNA extraction or sample preparation, will be largely automated. Moreover, all molecular pathology instruments will be part of an integrated workflow that traces the sample from extraction through the analytical steps until the results are reported; these steps will be guided by expert laboratory information systems. In situ hybridization and immunohistochemistry for quantification will be largely digitized, much as histology will be mostly digitized rather than viewed under the microscope. Expert commentary: This review summarizes the technical and regulatory issues concerning the standardization of molecular tests in pathology. A vision of the future perspectives of technological change is also provided.

  17. A field-to-desktop toolchain for X-ray CT densitometry enables tree ring analysis

    PubMed Central

    De Mil, Tom; Vannoppen, Astrid; Beeckman, Hans; Van Acker, Joris; Van den Bulcke, Jan

    2016-01-01

    Background and Aims Disentangling tree growth requires more than ring width data alone. Densitometry is considered a valuable proxy, yet laborious wood sample preparation and a lack of dedicated software limit the widespread use of density profiling for tree ring analysis. An X-ray computed tomography-based toolchain for tree increment cores is presented, which results in profile data sets suitable for visual exploration as well as density-based pattern matching. Methods Two temperate species (Quercus petraea, Fagus sylvatica) and one tropical species (Terminalia superba) were used for density profiling using an X-ray computed tomography facility with custom-made sample holders and dedicated processing software. Key Results Density-based pattern matching is developed and able to detect anomalies in ring series that can be corrected via interactive software. Conclusions A digital workflow allows generation of structure-corrected profiles of large sets of cores in a short time span that provide sufficient intra-annual density information for tree ring analysis. Furthermore, visual exploration of such data sets is of high value. The dated profiles can be used for high-resolution chronologies and also offer opportunities for fast screening of lesser studied tropical tree species. PMID:27107414

  18. Implementation of an i.v.-compounding robot in a hospital-based cancer center pharmacy.

    PubMed

    Yaniv, Angela W; Knoer, Scott J

    2013-11-15

    The implementation of a robotic device for compounding patient-specific chemotherapy doses is described, including a review of data on the robot's performance over a 13-month period. The automated system prepares individualized i.v. chemotherapy doses in a variety of infusion bags and syringes; more than 50 drugs are validated for use in the machine. The robot is programmed to recognize the physical parameters of syringes and vials and uses photographic identification, barcode identification, and gravimetric measurements to ensure that the correct ingredients are compounded and the final dose is accurate. The implementation timeline, including site preparation, logistics planning, installation, calibration, staff training, development of a pharmacy information system (PIS) interface, and validation by the state board of pharmacy, was about 10 months. In its first 13 months of operation, the robot was used to prepare 7384 medication doses; 85 doses (1.2%) found to be outside the desired accuracy range (±4%) were manually modified by pharmacy staff. Ongoing system monitoring has identified mechanical and materials-related problems including vial-recognition failures (in many instances, these issues were resolved by the system operator and robotic compounding proceeded successfully), interface issues affecting robot-PIS communication, and human errors such as the loading of an incorrect vial or bag into the machine. Through staff training, information technology improvements, and workflow adjustments, the robot's throughput has been steadily improved. An i.v.-compounding robot was successfully implemented in a cancer center pharmacy. The robot performs compounding tasks safely and accurately and has been integrated into the pharmacy's workflow.
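
    As a simple illustration of the gravimetric check described above, the sketch below flags doses whose measured weight falls outside the cited ±4% accuracy range. The tolerance value comes from the abstract; the function, drug names and weights are invented for illustration and do not reflect the vendor's actual algorithm.

```python
# Hedged sketch of a gravimetric dose check: flag doses whose measured weight
# deviates from the expected weight by more than the tolerance cited above (4%).
def check_dose(expected_mg: float, measured_mg: float, tolerance: float = 0.04) -> bool:
    """Return True if the compounded dose is within tolerance."""
    deviation = abs(measured_mg - expected_mg) / expected_mg
    return deviation <= tolerance


# Invented example doses: (drug, expected weight in mg, measured weight in mg).
doses = [("cyclophosphamide", 1000.0, 1012.0), ("paclitaxel", 300.0, 287.0)]
for drug, expected, measured in doses:
    status = "OK" if check_dose(expected, measured) else "manual review"
    print(f"{drug}: {status}")
```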

  19. Coupling between a multi-physics workflow engine and an optimization framework

    NASA Astrophysics Data System (ADS)

    Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.

    2016-03-01

    A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer with another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the many algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle invalid data which may appear during the optimization; consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization against various figures of merit and under various constraints.
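
    The socket-based coupling pattern described above can be sketched in a few lines. The example below uses JSON over TCP and a toy figure of merit as assumptions; it is not the actual SYCOMORE/URANIE protocol or data format.

```python
# Minimal sketch of a socket-based coupling between an optimizer (client) and a
# workflow engine (server). JSON over TCP and the toy figure of merit are
# assumptions; the abstract does not describe the real exchange format.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5055


def workflow_engine_server():
    """Evaluate one candidate received on the socket and send back its merit."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            candidate = json.loads(conn.recv(4096).decode())
            # Stand-in for the multi-physics workflow evaluation.
            merit = -(candidate["major_radius"] - 9.0) ** 2
            conn.sendall(json.dumps({"merit": merit}).encode())


threading.Thread(target=workflow_engine_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# Optimizer side: send one candidate design and read back its figure of merit.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(json.dumps({"major_radius": 8.5}).encode())
    print(json.loads(cli.recv(4096).decode()))
```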

  20. Warpgroup: increased precision of metabolomic data processing by consensus integration bound analysis

    PubMed Central

    Mahieu, Nathaniel G.; Spalding, Jonathan L.; Patti, Gary J.

    2016-01-01

    Motivation: Current informatic techniques for processing raw chromatography/mass spectrometry data break down under several common, non-ideal conditions. Importantly, hydrophilic interaction liquid chromatography (a key separation technology for metabolomics) produces data which are especially challenging to process. We identify three critical points of failure in current informatic workflows: compound-specific drift, integration region variance, and naive missing value imputation. We implement the Warpgroup algorithm to address these challenges. Results: Warpgroup adds peak subregion detection, consensus integration bound detection, and intelligent missing value imputation steps to the conventional informatic workflow. When compared with the conventional workflow, Warpgroup made major improvements to the processed data. The coefficient of variation for peaks detected in replicate injections of a complex Escherichia coli extract was halved (a reduction of 19%). Integration regions across samples were much more robust. Additionally, many signals lost by the conventional workflow were ‘rescued’ by the Warpgroup refinement, thereby resulting in greater analyte coverage in the processed data. Availability and implementation: Warpgroup is an open source R package available on GitHub at github.com/nathaniel-mahieu/warpgroup. The package includes example data and XCMS compatibility wrappers for ease of use. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: nathaniel.mahieu@wustl.edu or gjpattij@wustl.edu PMID:26424859

  1. Multiple pathogen biomarker detection using an encoded bead array in droplet PCR.

    PubMed

    Periyannan Rajeswari, Prem Kumar; Soderberg, Lovisa M; Yacoub, Alia; Leijon, Mikael; Andersson Svahn, Helene; Joensson, Haakan N

    2017-08-01

    We present a droplet PCR workflow for detection of multiple pathogen DNA biomarkers using fluorescent color-coded Luminex® beads. This strategy enables encoding of multiple singleplex droplet PCRs using a commercially available bead set of several hundred distinguishable fluorescence codes. This workflow provides scalability beyond the limited number offered by fluorescent detection probes such as TaqMan probes, commonly used in current multiplex droplet PCRs. The workflow was validated for three different Luminex bead sets coupled to target specific capture oligos to detect hybridization of three microorganisms infecting poultry: avian influenza, infectious laryngotracheitis virus and Campylobacter jejuni. In this assay, the target DNA was amplified with fluorescently labeled primers by PCR in parallel in monodisperse picoliter droplets, to avoid amplification bias. The color codes of the Luminex detection beads allowed concurrent and accurate classification of the different bead sets used in this assay. The hybridization assay detected target DNA of all three microorganisms with high specificity, from samples with average target concentration of a single DNA template molecule per droplet. This workflow demonstrates the possibility of increasing the droplet PCR assay detection panel to detect large numbers of targets in parallel, utilizing the scalability offered by the color-coded Luminex detection beads. Copyright © 2017. Published by Elsevier B.V.
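
    The claim of an average of one target molecule per droplet can be put in context with standard Poisson loading statistics, which are background knowledge rather than part of the cited assay. A minimal sketch:

```python
# Poisson droplet-occupancy sketch: at an average loading of lam templates per
# droplet, the fraction of droplets containing k templates is
# P(k) = exp(-lam) * lam**k / k!.
import math


def occupancy(lam: float, k: int) -> float:
    """Probability that a droplet contains exactly k template molecules."""
    return math.exp(-lam) * lam ** k / math.factorial(k)


lam = 1.0  # average of one template molecule per droplet, as in the abstract
print(f"empty droplets:           {occupancy(lam, 0):.2%}")
print(f"single-template droplets: {occupancy(lam, 1):.2%}")
print(f"multi-template droplets:  {1 - occupancy(lam, 0) - occupancy(lam, 1):.2%}")
```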

  2. Developing a Workflow Composite Score to Measure Clinical Information Logistics. A Top-down Approach.

    PubMed

    Liebe, J D; Hübner, U; Straede, M C; Thye, J

    2015-01-01

    Availability and usage of individual IT applications have been studied intensively in the past years. Recently, IT support of clinical processes has been attracting increasing attention. The underlying construct that describes the IT support of clinical workflows is clinical information logistics. This construct needs to be better understood, operationalised and measured. It is therefore the aim of this study to propose and develop a workflow composite score (WCS) for measuring clinical information logistics and to examine its quality based on reliability and validity analyses. We largely followed the procedural model of MacKenzie and colleagues (2011) for defining and conceptualising the construct domain, developing the measurement instrument, assessing the content validity, pretesting the instrument, specifying the model, capturing the data, computing the WCS, and testing the reliability and validity. Clinical information logistics was decomposed into the descriptors data and information, function, integration and distribution, which embraced the framework validated by an analysis of the international literature. This framework was refined by selecting representative clinical processes. We chose ward rounds, pre- and post-surgery processes and discharge as sample processes that served as concrete instances for the measurements. They are sufficiently complex, represent core clinical processes and involve different professions, departments and settings. The score was computed on the basis of data from 183 hospitals of different size, ownership, location and teaching status. Testing the reliability and validity yielded encouraging results: the reliability was high with r(split-half) = 0.89; the WCS discriminated between groups; the WCS correlated significantly and moderately with two EHR models; and the WCS received good evaluation results from a sample of chief information officers (n = 67). These findings support the further utilisation of the WCS. As the WCS does not assume ideal workflows as a gold standard but measures IT support of clinical workflows according to validated descriptors, a high portability of the WCS to other hospitals in other countries is very likely. The WCS will contribute to a better understanding of the construct of clinical information logistics.
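
    A toy sketch of the scoring idea follows: per-descriptor item scores are aggregated into a composite score, and split-half reliability is estimated by correlating odd- and even-item scores. The descriptor names follow the abstract, but the unweighted aggregation, item counts and data are invented for illustration and do not reproduce the published instrument.

```python
# Toy sketch: aggregate per-descriptor item scores into a workflow composite
# score (WCS) and estimate split-half reliability. All item values are invented.
import random
from statistics import mean


def composite_score(items: dict) -> float:
    """Unweighted mean over the four descriptors named in the abstract."""
    return mean(mean(values) for values in items.values())


def pearson(x, y):
    """Plain Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


random.seed(1)
hospitals = []
for _ in range(50):  # simulated hospitals with correlated item scores
    base = random.random()
    hospitals.append({
        descriptor: [base + random.uniform(-0.1, 0.1) for _ in range(4)]
        for descriptor in ("data_and_information", "function", "integration", "distribution")
    })

# Split-half reliability: score each hospital on odd vs. even items and correlate.
odd = [composite_score({k: v[::2] for k, v in h.items()}) for h in hospitals]
even = [composite_score({k: v[1::2] for k, v in h.items()}) for h in hospitals]
print("example WCS:", round(composite_score(hospitals[0]), 3))
print("split-half r:", round(pearson(odd, even), 3))
```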

  3. SMITH: a LIMS for handling next-generation sequencing workflows

    PubMed Central

    2014-01-01

    Background Life-science laboratories make increasing use of Next Generation Sequencing (NGS) for studying bio-macromolecules and their interactions. Array-based methods for measuring gene expression or protein-DNA interactions are being replaced by RNA-Seq and ChIP-Seq. Sequencing is generally performed by specialized facilities that have to keep track of sequencing requests, trace samples, ensure quality and make data available according to predefined privileges. An integrated tool helps to troubleshoot problems, maintain a high quality standard, and reduce time and costs. Commercial and non-commercial tools called LIMS (Laboratory Information Management Systems) are available for this purpose. However, they often come at prohibitive cost and/or lack the flexibility and scalability needed to adjust seamlessly to the frequently changing protocols employed. In order to manage the flow of sequencing data produced at the Genomic Unit of the Italian Institute of Technology (IIT), we developed SMITH (Sequencing Machine Information Tracking and Handling). Methods SMITH is a web application with a MySQL server at the backend. SMITH was developed by wet-lab scientists of the Centre for Genomic Science and database experts from the Politecnico of Milan in the context of a Genomic Data Model Project. The database schema stores all the information of an NGS experiment, including the descriptions of all protocols and algorithms used in the process. Notably, an attribute-value table allows associating an unconstrained textual description with each sample and all the data produced afterwards. This method permits the creation of metadata that can be used to search the database for specific files as well as for statistical analyses. Results SMITH runs automatically and limits direct human interaction mainly to administrative tasks. SMITH data-delivery procedures were standardized, making it easier for biologists and analysts to navigate the data. Automation also helps save time. The workflows are available through an API provided by the workflow management system. The parameters and input data are passed to the workflow engine that performs de-multiplexing, quality control, alignments, etc. Conclusions SMITH standardizes, automates, and speeds up sequencing workflows. Annotation of data with key-value pairs facilitates meta-analysis. PMID:25471934
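
    The attribute-value annotation approach described above can be illustrated with a minimal sketch. The table and column names below are simplified assumptions, not SMITH's actual MySQL schema.

```python
# Simplified sketch of an attribute-value (key-value) annotation table like the
# one described for SMITH; table and column names are assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sample (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sample_attribute (
    sample_id INTEGER REFERENCES sample(id),
    key TEXT,
    value TEXT
);
""")
db.executemany("INSERT INTO sample VALUES (?, ?)", [(1, "lib_001"), (2, "lib_002")])
db.executemany(
    "INSERT INTO sample_attribute VALUES (?, ?, ?)",
    [
        (1, "assay", "RNA-Seq"), (1, "organism", "human"),
        (2, "assay", "ChIP-Seq"), (2, "antibody", "H3K4me3"),
    ],
)

# Free-text keys allow new annotations and searches without schema changes.
rows = db.execute(
    """SELECT s.name FROM sample s
       JOIN sample_attribute a ON a.sample_id = s.id
       WHERE a.key = 'assay' AND a.value = 'ChIP-Seq'"""
).fetchall()
print(rows)  # [('lib_002',)]
```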

  4. SMITH: a LIMS for handling next-generation sequencing workflows.

    PubMed

    Venco, Francesco; Vaskin, Yuriy; Ceol, Arnaud; Muller, Heiko

    2014-01-01

    Life-science laboratories make increasing use of Next Generation Sequencing (NGS) for studying bio-macromolecules and their interactions. Array-based methods for measuring gene expression or protein-DNA interactions are being replaced by RNA-Seq and ChIP-Seq. Sequencing is generally performed by specialized facilities that have to keep track of sequencing requests, trace samples, ensure quality and make data available according to predefined privileges. An integrated tool helps to troubleshoot problems, maintain a high quality standard, and reduce time and costs. Commercial and non-commercial tools called LIMS (Laboratory Information Management Systems) are available for this purpose. However, they often come at prohibitive cost and/or lack the flexibility and scalability needed to adjust seamlessly to the frequently changing protocols employed. In order to manage the flow of sequencing data produced at the Genomic Unit of the Italian Institute of Technology (IIT), we developed SMITH (Sequencing Machine Information Tracking and Handling). SMITH is a web application with a MySQL server at the backend. SMITH was developed by wet-lab scientists of the Centre for Genomic Science and database experts from the Politecnico of Milan in the context of a Genomic Data Model Project. The database schema stores all the information of an NGS experiment, including the descriptions of all protocols and algorithms used in the process. Notably, an attribute-value table allows associating an unconstrained textual description with each sample and all the data produced afterwards. This method permits the creation of metadata that can be used to search the database for specific files as well as for statistical analyses. SMITH runs automatically and limits direct human interaction mainly to administrative tasks. SMITH data-delivery procedures were standardized, making it easier for biologists and analysts to navigate the data. Automation also helps save time. The workflows are available through an API provided by the workflow management system. The parameters and input data are passed to the workflow engine that performs de-multiplexing, quality control, alignments, etc. SMITH standardizes, automates, and speeds up sequencing workflows. Annotation of data with key-value pairs facilitates meta-analysis.

  5. High Resolution Melting (HRM) for High-Throughput Genotyping-Limitations and Caveats in Practical Case Studies.

    PubMed

    Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik

    2017-11-03

    High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogeneous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of an HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.

  6. High Resolution Melting (HRM) for High-Throughput Genotyping—Limitations and Caveats in Practical Case Studies

    PubMed Central

    Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz

    2017-01-01

    High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogeneous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of an HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup. PMID:29099791

  7. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    PubMed

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.

  8. University of TX Bureau of Economic Geology's Core Research Centers: The Time is Right for Registering Physical Samples and Assigning IGSN's - Workflows, Stumbling Blocks, and Successes.

    NASA Astrophysics Data System (ADS)

    Averett, A.; DeJarnett, B. B.

    2016-12-01

    The University of Texas Bureau of Economic Geology (BEG) serves as the geological survey for Texas and operates three geological sample repositories that house well over 2 million boxes of geological samples (cores and cuttings) and an abundant amount of geoscience data (geophysical logs, thin sections, geochemical analyses, etc.). Material is accessible and searchable online, and it is publicly available to the geological community for research and education. Patrons access information about our collection by using our online core and log database (SQL format). BEG is currently undertaking a large project to: 1) improve the internal accuracy of metadata associated with the collection; 2) enhance the capabilities of the database for both BEG curators and researchers as well as our external patrons; and 3) ensure easy and efficient navigation for patrons through our online portal. As BEG undertakes this project, BEG is in the early stages of planning to export the metadata for its collection into SESAR (System for Earth Sample Registration) and have IGSNs (International GeoSample Numbers) assigned to its samples. Education regarding the value of IGSNs and an external registry (SESAR) has been crucial to receiving management support for the project, because the concept and potential benefits of registering samples in a registry outside of the institution were not well known prior to this project. Potential benefits such as increases in discoverability, repository recognition in publications, and interoperability were presented. The project was well received by management, and BEG fully supports the effort to register our physical samples with SESAR. Since BEG is only in the initial phase of this project, any stumbling blocks, workflow issues, successes/failures, etc. can only be predicted at this point, but by mid-December BEG expects to have several concrete issues to present in the session. Currently, our most pressing issue involves establishing the most efficient workflow for exporting large amounts of metadata in a format that SESAR can easily ingest, and determining how this can best be accomplished with very few BEG staff assigned to the project.
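
    A minimal sketch of the metadata export step discussed above is shown below: internal database records are flattened into a CSV for batch registration. The column names are hypothetical placeholders and would still need to be mapped onto the registry's actual batch template; none of them are taken from SESAR documentation.

```python
# Hedged sketch: export core metadata from an internal database into a flat CSV
# for batch sample registration. Column names are hypothetical placeholders,
# not the registry's real template fields.
import csv
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE core (core_id TEXT, well_name TEXT, county TEXT, top_depth_ft REAL)")
db.executemany(
    "INSERT INTO core VALUES (?, ?, ?, ?)",
    [("BEG-0001", "Smith #1", "Reeves", 10520.0),
     ("BEG-0002", "Jones A-2", "Pecos", 8740.5)],
)

with open("sample_registration_batch.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["local_id", "sample_name", "locality", "depth_ft", "sample_type"])
    for core_id, well, county, depth in db.execute("SELECT * FROM core"):
        writer.writerow([core_id, well, f"{county} County, Texas", depth, "Core"])

print(open("sample_registration_batch.csv").read())
```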

  9. Autonomous Metabolomics for Rapid Metabolite Identification in Global Profiling

    DOE PAGES

    Benton, H. Paul; Ivanisevic, Julijana; Mahieu, Nathaniel G.; ...

    2014-12-12

    An autonomous metabolomic workflow combining mass spectrometry analysis with tandem mass spectrometry data acquisition was designed to allow for simultaneous data processing and metabolite characterization. Although tandem mass spectrometry data have previously been generated on the fly, the experiments described herein combine this technology with the bioinformatic resources of XCMS and METLIN. As a result of this unique integration, we can analyze large profiling datasets and simultaneously obtain structural identifications. Furthermore, validation of the workflow on bacterial samples allowed the profiling of on the order of a thousand metabolite features with simultaneous tandem mass spectral data acquisition. The tandem mass spectrometry data acquisition enabled automatic search and matching against the METLIN tandem mass spectrometry database, shortening the current workflow from days to hours. Overall, the autonomous approach to untargeted metabolomics provides an efficient means of metabolomic profiling, and will ultimately allow the more rapid integration of comparative analyses, metabolite identification, and data analysis at a systems biology level.

  10. Leaf LIMS: A Flexible Laboratory Information Management System with a Synthetic Biology Focus.

    PubMed

    Craig, Thomas; Holland, Richard; D'Amore, Rosalinda; Johnson, James R; McCue, Hannah V; West, Anthony; Zulkower, Valentin; Tekotte, Hille; Cai, Yizhi; Swan, Daniel; Davey, Robert P; Hertz-Fowler, Christiane; Hall, Anthony; Caddick, Mark

    2017-12-15

    This paper presents Leaf LIMS, a flexible laboratory information management system (LIMS) designed to address the complexity of synthetic biology workflows. At the project's inception there was no LIMS designed specifically to address synthetic biology processes, with most systems focused on either next generation sequencing or biobanks and clinical sample handling. Leaf LIMS implements integrated project, item, and laboratory stock tracking, offering complete sample and construct genealogy, materials and lot tracking, and modular assay data capture. Hence, it enables highly configurable task-based workflows and supports data capture from project inception to completion. As such, in addition to supporting synthetic biology it is ideal for many laboratory environments with multiple projects and users. The system is deployed as a web application through Docker and is provided under a permissive MIT license. It is freely available for download at https://leaflims.github.io.

  11. Knowledge Data Base for Amorphous Metals

    DTIC Science & Technology

    2007-07-26

    not programmatic, updates. Over 100 custom SQL statements that maintain the domain-specific data are attached to the workflow entries in a generic ... for the form by populating the SQL and run generation tables. Application data may be prepared in different ways for two steps that invoke the same form ... (run generation mode). There is a single table of SQL commands. Each record has a user-definable ID, the SQL code, and a comment. The run generation

  12. Selective Capture of Histidine-tagged Proteins from Cell Lysates Using TEM grids Modified with NTA-Graphene Oxide

    NASA Astrophysics Data System (ADS)

    Benjamin, Christopher J.; Wright, Kyle J.; Bolton, Scott C.; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L.; Jiang, Wen; Thompson, David H.

    2016-10-01

    We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.

  13. Selective Capture of Histidine-tagged Proteins from Cell Lysates Using TEM grids Modified with NTA-Graphene Oxide.

    PubMed

    Benjamin, Christopher J; Wright, Kyle J; Bolton, Scott C; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L; Jiang, Wen; Thompson, David H

    2016-10-17

    We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.

  14. High-Content Analysis of CRISPR-Cas9 Gene-Edited Human Embryonic Stem Cells.

    PubMed

    Carlson-Stevermer, Jared; Goedland, Madelyn; Steyer, Benjamin; Movaghar, Arezoo; Lou, Meng; Kohlenberg, Lucille; Prestil, Ryan; Saha, Krishanu

    2016-01-12

    CRISPR-Cas9 gene editing of human cells and tissues holds much promise to advance medicine and biology, but standard editing methods require weeks to months of reagent preparation and selection where much or all of the initial edited samples are destroyed during analysis. ArrayEdit, a simple approach utilizing surface-modified multiwell plates containing one-pot transcribed single-guide RNAs, separates thousands of edited cell populations for automated, live, high-content imaging and analysis. The approach lowers the time and cost of gene editing and produces edited human embryonic stem cells at high efficiencies. Edited genes can be expressed in both pluripotent stem cells and differentiated cells. This preclinical platform adds important capabilities to observe editing and selection in situ within complex structures generated by human cells, ultimately enabling optical and other molecular perturbations in the editing workflow that could refine the specificity and versatility of gene editing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Using Isotope Ratio Infrared Spectrometer to determine δ13C and δ18O of carbonate samples

    NASA Astrophysics Data System (ADS)

    Smajgl, Danijela; Stöbener, Nils; Mandic, Magda

    2017-04-01

    The isotopic composition of calcifying organisms is a key tool for reconstructing past seawater temperature and water chemistry. Therefore, stable carbon and oxygen isotopes (δ13C and δ18O) in carbonates have been widely used for the reconstruction of paleoenvironments. Precise and accurate determination of the isotopic composition of carbon (13C) and oxygen (18O) from carbonate samples, with proper referencing and a suitable data evaluation algorithm, presents a challenge for scientists. Mass spectrometry was the only widely used technique for this kind of analysis, but recent advances make laser-based spectroscopy a viable alternative. The Thermo Scientific Delta Ray Isotope Ratio Infrared Spectrometer (IRIS) analyzer with the Universal Reference Interface (URI) Connect is one of those alternatives and, with the TELEDYNE Cetac ASX-7100 autosampler, extends the traditional offerings with a system of high precision and high sample throughput. To establish the precision and accuracy of measurements, and also to develop an optimal sample preparation method for measurements with the Delta Ray IRIS and URI Connect, IAEA reference materials were used. Preparation is similar to a Gas Bench II method: carbonate material is added into the vials, flushed with CO2-free synthetic air and acidified with a few droplets of 104% H3PO4. The sample amount used for analysis can be as low as 200 μg. Samples are measured after acidification and an equilibration time of one hour at 70°C. The CO2 gas generated by the reaction is flushed into the variable volume inside the URI Connect through the Nafion-based built-in water trap. For this step, carrier gas (CO2-free air) is used to flush the gas from the vial into the variable volume, which has a maximum volume of 100 ml. A small amount of the sample is then used for automatic determination of the concentration present in the variable volume. The Thermo Scientific Qtegra Software automatically adjusts any additional dilution of the sample to achieve the desired concentration (usually 400 ppm) in the analyzer. As part of the workflow, reference gas measurements are regularly made at the same concentration as the sample to allow for automatic drift and linearity correction. With the described sample preparation and measurement method, samples are measured with standard deviations of less than 0.1‰ for δ13C and δ18O, respectively, and an accuracy of <0.01‰. The system can measure up to 100 samples per day. An equivalent of about 80 µg of pure CO2 gas is needed to complete an analysis. Due to its small weight and robustness, sample analysis can be performed in the field. Applying the new technology of Isotope Ratio Infrared Spectrometers in environmental and paleoenvironmental research can extend our knowledge of complex seawater history and the CO2 cycle.
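
    The reported δ values follow the standard delta notation, δ = (R_sample/R_reference − 1) × 1000 (in per mil). The sketch below shows that calculation together with a generic two-point normalization against standards of known composition; the normalization is a simplified stand-in, not the Qtegra software's actual drift and linearity correction.

```python
# Delta notation: delta (per mil) = (R_sample / R_reference - 1) * 1000.
# The two-point normalization below is a generic stand-in, not the instrument
# software's actual correction algorithm; all numbers are toy values.
def delta_per_mil(r_sample: float, r_reference: float) -> float:
    return (r_sample / r_reference - 1.0) * 1000.0


def normalize(measured, ref_measured, ref_true):
    """Map measured deltas onto the reference scale using two anchor standards."""
    (m1, m2), (t1, t2) = ref_measured, ref_true
    slope = (t2 - t1) / (m2 - m1)
    return [round(t1 + slope * (m - m1), 2) for m in measured]


# Toy raw 13C/12C ratios for a sample and the working reference gas.
print(round(delta_per_mil(0.0111802, 0.0111960), 2), "per mil (raw)")

# Normalize raw sample deltas with two carbonate standards of known delta 13C.
print(normalize([-1.30, 2.05], ref_measured=(-1.45, 1.90), ref_true=(-1.41, 1.95)))
```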

  16. Inferring Clinical Workflow Efficiency via Electronic Medical Record Utilization

    PubMed Central

    Chen, You; Xie, Wei; Gunter, Carl A; Liebovitz, David; Mehrotra, Sanjay; Zhang, He; Malin, Bradley

    2015-01-01

    Complexity in clinical workflows can lead to inefficiency in making diagnoses, ineffectiveness of treatment plans and uninformed management of healthcare organizations (HCOs). Traditional strategies to manage workflow complexity are based on measuring the gaps between workflows defined by HCO administrators and the actual processes followed by staff in the clinic. However, existing methods tend to neglect the influence of EMR systems on the utilization of workflows, which could be leveraged to optimize workflows facilitated through the EMR. In this paper, we introduce a framework to infer clinical workflows from the utilization of an EMR and show how such workflows roughly partition into four types according to their efficiency. Our framework infers workflows at several levels of granularity through data mining technologies. We study four months of EMR event logs from a large medical center, including 16,569 inpatient stays, and illustrate that approximately 95% of workflows are efficient and that 80% of patients are on such workflows. At the same time, we show that the remaining 5% of workflows may be inefficient due to a variety of factors, such as complex patients. PMID:26958173
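
    The basic idea of inferring workflows from EMR utilization can be illustrated by grouping logged events per inpatient stay and counting the resulting action sequences. The sketch below uses invented events and a deliberately naive variant count; the cited study applies considerably richer process-mining methods.

```python
# Toy sketch: infer common workflow variants from EMR event logs by grouping
# events per inpatient stay and counting the resulting action sequences.
# Event data are invented for illustration only.
from collections import Counter, defaultdict

event_log = [  # (stay_id, timestamp, action)
    (1, 1, "admit"), (1, 2, "order_labs"), (1, 3, "review_results"), (1, 4, "discharge"),
    (2, 1, "admit"), (2, 2, "order_labs"), (2, 3, "review_results"), (2, 4, "discharge"),
    (3, 1, "admit"), (3, 2, "order_imaging"), (3, 3, "order_labs"),
    (3, 4, "review_results"), (3, 5, "discharge"),
]

# Group events into one ordered trace per inpatient stay.
traces = defaultdict(list)
for stay_id, ts, action in sorted(event_log, key=lambda e: (e[0], e[1])):
    traces[stay_id].append(action)

# Count identical traces as workflow variants and report their prevalence.
variant_counts = Counter(tuple(trace) for trace in traces.values())
total = sum(variant_counts.values())
for variant, count in variant_counts.most_common():
    print(f"{count / total:.0%}  {' -> '.join(variant)}")
```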

  17. UPLC-MS/MS determination of ptaquiloside and pterosin B in preserved natural water.

    PubMed

    Clauson-Kaas, Frederik; Hansen, Hans Christian Bruun; Strobel, Bjarne W

    2016-11-01

    The naturally occurring carcinogen ptaquiloside and its degradation product pterosin B are found in water leaching from bracken stands. The objective of this work is to present a new sample preservation method and a fast UPLC-MS/MS method for quantification of ptaquiloside and pterosin B in environmental water samples, employing a novel internal standard. A faster, reliable, and efficient method was developed for isolation of high-purity ptaquiloside and pterosin B from plant material for use as analytical standards, with purity verified by 1H-NMR. The chemical analysis was performed by cleanup and preconcentration of samples with solid phase extraction, before analyte quantification with UPLC-MS/MS. By including gradient elution and optimizing the liquid chromatography mobile phase buffer system, a total run cycle of 5 min was achieved, with method detection limits, including preconcentration, of 8 and 4 ng/L for ptaquiloside and pterosin B, respectively. The use of loganin as internal standard improved repeatability of the determination of both analytes, though it could not be employed for sample preparation. Buffering raw water samples in situ with ammonium acetate to pH ∼5.5 decisively increased sample integrity at realistic transportation and storage conditions prior to extraction. Groundwater samples collected in November 2015 at the shallow water table below a Danish bracken stand were preserved and analyzed using the above methods, and ptaquiloside concentrations of 3.8 ± 0.24 μg/L (±sd, n = 3) were found, much higher than previously reported. Graphical abstract: Workflow overview of ptaquiloside determination.

  18. One Sample, One Shot - Evaluation of sample preparation protocols for the mass spectrometric proteome analysis of human bile fluid without extensive fractionation.

    PubMed

    Megger, Dominik A; Padden, Juliet; Rosowski, Kristin; Uszkoreit, Julian; Bracht, Thilo; Eisenacher, Martin; Gerges, Christian; Neuhaus, Horst; Schumacher, Brigitte; Schlaak, Jörg F; Sitek, Barbara

    2017-02-10

    The proteome analysis of bile fluid represents a promising strategy to identify biomarker candidates for various diseases of the hepatobiliary system. However, to obtain substantive results in biomarker discovery studies, large patient cohorts necessarily need to be analyzed. Consequently, this would lead to an unmanageable number of samples to be analyzed if sample preparation protocols with extensive fractionation methods were applied. Hence, the performance of simple workflows allowing for "one sample, one shot" experiments has been evaluated in this study. In detail, sixteen different protocols involving modifications at the stages of desalting, delipidation, deglycosylation and tryptic digestion have been examined. Each method has been individually evaluated regarding various performance criteria, and comparative analyses have been conducted to uncover possible complementarities. Here, the best performance in terms of proteome coverage was achieved by a combination of acetone precipitation with in-gel digestion. Finally, a mapping of all obtained protein identifications with putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) revealed several proteins easily detectable in bile fluid. These results can build the basis for future studies with large and well-defined patient cohorts in a more disease-related context. Human bile fluid is a proximal body fluid and is supposed to be a potential source of disease markers. However, due to its biochemical composition, the proteome analysis of bile fluid still represents a challenging task and is therefore mostly conducted using extensive fractionation procedures. This in turn leads to a high number of mass spectrometric measurements for one biological sample. Considering that a high number of biological samples needs to be analyzed in biomarker discovery studies to overcome biological variability, this leads to the dilemma of an unmanageable number of necessary MS-based analyses. Hence, easy sample preparation protocols are demanded, representing a compromise between proteome coverage and simplicity. In the presented study, such protocols have been evaluated regarding various technical criteria (e.g. identification rates, missed cleavages, chromatographic separation), uncovering the strengths and weaknesses of the various methods. Furthermore, a cumulative bile proteome list has been generated that extends the current bile proteome catalog by 248 proteins. Finally, a mapping with putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) derived from tissue-based studies revealed that several of these proteins are easily and reproducibly detectable in human bile. Therefore, the presented technical work represents a solid base for future disease-related studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Workflow management systems in radiology

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim

    1998-07-01

    In a situation of shrinking health care budgets, increasing cost pressure and growing demands to increase the efficiency and quality of medical services, health care enterprises are forced to optimize or completely re-design their processes. Although information technology is widely agreed to have the potential to contribute to cost reduction and efficiency improvement, the real success factors are the re-definition and automation of processes: Business Process Re-engineering and Workflow Management. In this paper we discuss architectures for the use of workflow management systems in radiology. We propose to move forward from information systems in radiology (RIS, PACS) to Radiology Management Systems, in which workflow functionality (process definitions and process automation) is implemented through autonomous workflow management systems (WfMS). In a workflow-oriented architecture, an autonomous workflow enactment service communicates with workflow client applications via standardized interfaces. We discuss the need for and the benefits of such an approach, emphasize the separation of workflow management systems and application systems, and outline the consequences that arise for the architecture of workflow-oriented information systems. This includes an appropriate workflow terminology and the definition of standard interfaces for workflow-aware application systems. Workflow studies in various institutions have shown that most of the processes in radiology are well structured and suited for a workflow management approach. Numerous commercially available Workflow Management Systems (WfMS) were investigated, and some of them, which are process-oriented and application independent, appear suitable for use in radiology.

  20. Bioinformatics workflows and web services in systems biology made easy for experimentalists.

    PubMed

    Jimenez, Rafael C; Corpas, Manuel

    2013-01-01

    Workflows are useful for performing data analysis and integration in systems biology. Workflow management systems can help users create workflows without any previous knowledge of programming or web services. However, the computational skills required to build such workflows are usually above the level most biological experimentalists are comfortable with. In this chapter we introduce workflow management systems that reuse existing workflows instead of creating them, making it easier for experimentalists to perform computational tasks.

  1. A Novel Workflow to Enrich and Isolate Patient-Matched EpCAMhigh and EpCAMlow/negative CTCs Enables the Comparative Characterization of the PIK3CA Status in Metastatic Breast Cancer

    PubMed Central

    Lampignano, Rita; Yang, Liwen; Neumann, Martin H. D.; Franken, André; Fehm, Tanja; Niederacher, Dieter; Neubauer, Hans

    2017-01-01

    Circulating tumor cells (CTCs), potential precursors of most epithelial solid tumors, are mainly enriched by epithelial cell adhesion molecule (EpCAM)-dependent technologies. Hence, these approaches may overlook mesenchymal CTCs, which are considered highly malignant. Our aim was to establish a workflow to enrich and isolate patient-matched EpCAMhigh and EpCAMlow/negative CTCs within the same blood samples, and to investigate the phosphatidylinositol 3-kinase catalytic subunit alpha (PIK3CA) mutational status within single CTCs. We sequentially processed metastatic breast cancer (MBC) blood samples via the CellSearch® (EpCAM-based) and Parsortix™ (size-based) systems. After enrichment, cells captured in Parsortix™ cassettes were stained in situ for nuclei, cytokeratins, EpCAM and CD45. Afterwards, sorted cells were isolated via the CellCelector™ micromanipulator and their genomes were amplified. Lastly, the PIK3CA mutational status was analyzed by combining an amplicon-based approach with Sanger sequencing. In 54% of patients' blood samples both EpCAMhigh and EpCAMlow/negative cells were identified and successfully isolated. High genomic integrity was observed in 8% of amplified genomes of EpCAMlow/negative cells vs. 28% of EpCAMhigh cells, suggesting increased apoptosis in the former CTC subpopulation. Furthermore, PIK3CA hotspot mutations were detected in both EpCAMhigh and EpCAMlow/negative CTCs. Our workflow is suitable for single-CTC analysis, permitting—for the first time—assessment of the heterogeneity of the PIK3CA mutational status within patient-matched EpCAMhigh and EpCAMlow/negative CTCs. PMID:28858218

  2. A fully actuated robotic assistant for MRI-guided prostate biopsy and brachytherapy

    NASA Astrophysics Data System (ADS)

    Li, Gang; Su, Hao; Shang, Weijian; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fischer, Gregory S.

    2013-03-01

    Intra-operative medical imaging enables the incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high-resolution, real-time, high soft tissue contrast imaging without ionizing radiation. However, for most current image-guided approaches only static pre-operative images are accessible for guidance, and these are unable to provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI pose great challenges to developing robotic systems that can perform inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for actuated percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g. taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm.

  3. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    PubMed Central

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org. PMID:22559942

  4. MetaGenSense: A web-application for analysis and exploration of high throughput sequencing metagenomic data

    PubMed Central

    Denis, Jean-Baptiste; Vandenbogaert, Mathias; Caro, Valérie

    2016-01-01

    The detection and characterization of emerging infectious agents has been a continuing public health concern. High Throughput Sequencing (HTS) or Next-Generation Sequencing (NGS) technologies have proven to be promising approaches for efficient and unbiased detection of pathogens in complex biological samples, providing access to comprehensive analyses. As NGS approaches typically yield millions of putatively representative reads per sample, efficient data management and visualization resources have become mandatory. Most usually, those resources are implemented through a dedicated Laboratory Information Management System (LIMS), solely to provide perspective regarding the available information. We developed an easily deployable web interface facilitating the management and bioinformatics analysis of metagenomics data samples. It was engineered to run associated and dedicated Galaxy workflows for the detection and, eventually, the classification of pathogens. The web application allows easy interaction with existing Galaxy metagenomic workflows and facilitates the organization, exploration and aggregation of the most relevant sample-specific sequences among millions of genomic sequences, allowing users to determine their relative abundance and associate them with the most closely related organism or pathogen. The user-friendly Django-based interface associates the users’ input data and its metadata through a bio-IT provided set of resources (a Galaxy instance, and both sufficient storage and grid computing power). Galaxy is used to handle and analyze the user’s input data from loading, indexing, mapping and assembly to DB searches. Interaction between our application and Galaxy is ensured by the BioBlend library, which gives API-based access to Galaxy’s main features. Metadata about samples, runs, and the workflow results are stored in the LIMS. For metagenomic classification and exploration purposes, we show, as a proof of concept, that the integration of intuitive exploratory tools, like Krona for the representation of taxonomic classification, can be achieved very easily. In the spirit of Galaxy, the interface enables the sharing of scientific results with fellow team members. PMID:28451381

  5. MetaGenSense: A web-application for analysis and exploration of high throughput sequencing metagenomic data.

    PubMed

    Correia, Damien; Doppelt-Azeroual, Olivia; Denis, Jean-Baptiste; Vandenbogaert, Mathias; Caro, Valérie

    2015-01-01

    The detection and characterization of emerging infectious agents has been a continuing public health concern. High Throughput Sequencing (HTS) or Next-Generation Sequencing (NGS) technologies have proven to be promising approaches for efficient and unbiased detection of pathogens in complex biological samples, providing access to comprehensive analyses. As NGS approaches typically yield millions of putatively representative reads per sample, efficient data management and visualization resources have become mandatory. Most usually, those resources are implemented through a dedicated Laboratory Information Management System (LIMS), solely to provide perspective regarding the available information. We developed an easily deployable web interface facilitating the management and bioinformatics analysis of metagenomics data samples. It was engineered to run associated and dedicated Galaxy workflows for the detection and, eventually, the classification of pathogens. The web application allows easy interaction with existing Galaxy metagenomic workflows and facilitates the organization, exploration and aggregation of the most relevant sample-specific sequences among millions of genomic sequences, allowing users to determine their relative abundance and associate them with the most closely related organism or pathogen. The user-friendly Django-based interface associates the users' input data and its metadata through a bio-IT provided set of resources (a Galaxy instance, and both sufficient storage and grid computing power). Galaxy is used to handle and analyze the user's input data from loading, indexing, mapping and assembly to DB searches. Interaction between our application and Galaxy is ensured by the BioBlend library, which gives API-based access to Galaxy's main features. Metadata about samples, runs, and the workflow results are stored in the LIMS. For metagenomic classification and exploration purposes, we show, as a proof of concept, that the integration of intuitive exploratory tools, like Krona for the representation of taxonomic classification, can be achieved very easily. In the spirit of Galaxy, the interface enables the sharing of scientific results with fellow team members.
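
    Since both MetaGenSense records above describe API access to Galaxy via the BioBlend library, a generic BioBlend sketch may help illustrate the pattern. The URL, API key, file name and workflow name below are placeholders, and the calls shown are standard BioBlend usage rather than MetaGenSense's own code.

```python
# Minimal BioBlend sketch: connect to a Galaxy instance, upload a dataset and
# invoke a workflow. URL, API key and names are placeholders, not MetaGenSense
# defaults; this is generic BioBlend usage, not the cited application's code.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

# Create a history and upload the input reads into it.
history = gi.histories.create_history(name="metagenomics-run-001")
upload = gi.tools.upload_file("reads.fastq.gz", history["id"])
dataset_id = upload["outputs"][0]["id"]

# Pick a workflow by name and map the uploaded dataset to its first input step.
workflow = next(w for w in gi.workflows.get_workflows() if w["name"] == "pathogen-detection")
invocation = gi.workflows.invoke_workflow(
    workflow["id"],
    inputs={"0": {"src": "hda", "id": dataset_id}},
    history_id=history["id"],
)
print("invocation state:", invocation["state"])
```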

  6. ATAQS: A computational software tool for high throughput transition optimization and validation for selected reaction monitoring mass spectrometry

    PubMed Central

    2011-01-01

    Background Since its inception, proteomics has essentially operated in a discovery mode with the goal of identifying and quantifying the maximal number of proteins in a sample. Increasingly, proteomic measurements are also supporting hypothesis-driven studies, in which a predetermined set of proteins is consistently detected and quantified in multiple samples. Selected reaction monitoring (SRM) is a targeted mass spectrometric technique that supports the detection and quantification of specific proteins in complex samples at high sensitivity and reproducibility. Here, we describe ATAQS, an integrated software platform that supports all stages of targeted, SRM-based proteomics experiments, including target selection, transition optimization and post-acquisition data analysis. This software will significantly facilitate the use of targeted proteomic techniques and contribute to the generation of highly sensitive, reproducible and complete datasets that are particularly critical for the discovery and validation of targets in hypothesis-driven studies in systems biology. Results We introduce a new open source software pipeline, ATAQS (Automated and Targeted Analysis with Quantitative SRM), which consists of a number of modules that collectively support the SRM assay development workflow for targeted proteomic experiments (project management; generation of proteins, peptides and transitions; and validation of peptide detection by SRM). ATAQS provides a flexible pipeline for end-users by allowing the workflow to start or end at any point of the pipeline, and for computational biologists, by enabling the easy extension of Java algorithm classes for their own algorithm plug-ins or connection via an external web site. This integrated system supports all steps in an SRM-based experiment and provides a user-friendly GUI that can be run on any operating system that allows the installation of the Mozilla Firefox web browser. Conclusions Targeted proteomics via SRM is a powerful new technique that enables the reproducible and accurate identification and quantification of sets of proteins of interest. ATAQS is the first open-source software that supports all steps of the targeted proteomics workflow. ATAQS also provides software API (Application Program Interface) documentation that enables the addition of new algorithms to each of the workflow steps. The software, installation guide and sample dataset can be found at http://tools.proteomecenter.org/ATAQS/ATAQS.html PMID:21414234
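    ATAQS generates candidate SRM transitions, i.e. precursor/fragment m/z pairs, for target peptides. The sketch below is only a generic illustration of that arithmetic (computing a doubly charged precursor m/z and singly charged y-ion m/z values from monoisotopic residue masses); it is not ATAQS code, and the peptide sequence is an arbitrary example.

```python
# Illustrative SRM transition arithmetic (not ATAQS code): precursor and y-ion m/z
# for an arbitrary tryptic peptide, using monoisotopic residue masses.
PROTON = 1.007276
WATER = 18.010565
RESIDUE = {  # monoisotopic residue masses (Da) for a few amino acids
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "E": 129.04259,
    "F": 147.06841, "R": 156.10111,
}

def precursor_mz(seq, charge):
    mass = sum(RESIDUE[aa] for aa in seq) + WATER
    return (mass + charge * PROTON) / charge

def y_ion_mz(seq, length, charge=1):
    # y-ions contain the C-terminal 'length' residues plus water.
    mass = sum(RESIDUE[aa] for aa in seq[-length:]) + WATER
    return (mass + charge * PROTON) / charge

peptide = "SAGLFK"  # arbitrary example sequence
print("Q1 (2+):", round(precursor_mz(peptide, 2), 4))
for n in range(2, len(peptide)):
    print(f"Q3 y{n} (1+):", round(y_ion_mz(peptide, n), 4))
```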

  7. 3D-printed Bioanalytical Devices

    PubMed Central

    Bishop, Gregory W; Satterwhite-Warden, Jennifer E; Kadimisetty, Karteek; Rusling, James F

    2016-01-01

    While 3D printing technologies first appeared in the 1980s, prohibitive costs, limited materials, and the relatively small number of commercially available printers confined applications mainly to prototyping for manufacturing purposes. As technologies, printer cost, materials, and accessibility continue to improve, 3D printing has found widespread implementation in research and development in many disciplines due to ease-of-use and relatively fast design-to-object workflow. Several 3D printing techniques have been used to prepare devices such as milli- and microfluidic flow cells for analyses of cells and biomolecules as well as interfaces that enable bioanalytical measurements using cellphones. This review focuses on preparation and applications of 3D-printed bioanalytical devices. PMID:27250897

  8. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g., GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows it to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from the ES clusters.

  9. Microscopic dual-energy CT (microDECT): a flexible tool for multichannel ex vivo 3D imaging of biological specimens.

    PubMed

    Handschuh, S; Beisser, C J; Ruthensteiner, B; Metscher, B D

    2017-07-01

    Dual-energy computed tomography (DECT) uses two different x-ray energy spectra in order to differentiate between tissues, materials or elements in a single sample or patient. DECT is becoming increasingly popular in clinical imaging and preclinical in vivo imaging of small animal models, but there have been only very few reports on ex vivo DECT of biological samples at microscopic resolutions. The present study has three main aims. First, we explore the potential of microscopic DECT (microDECT) for delivering isotropic multichannel 3D images of fixed biological samples with standard commercial laboratory-based microCT setups at spatial resolutions reaching below 10 μm. Second, we aim to retain the maximum image resolution and quality during material decomposition. Third, we test the suitability of different contrast agents currently used for ex vivo staining of biological samples for microDECT imaging. To address these aims, we used microCT scans of four different samples stained with x-ray dense contrast agents. MicroDECT scans were acquired with five different commercial microCT scanners from four companies. We present a detailed description of the microDECT workflow, including sample preparation, image acquisition, image processing and postreconstruction material decomposition, which may serve as a practical guide for applying microDECT. The MATLAB script (The Mathworks Inc., Natick, MA, USA) used for material decomposition (including a graphical user interface) is provided as a supplement to this paper (https://github.com/microDECT/DECTDec). In general, the presented microDECT workflow yielded satisfactory results for all tested specimens. Original scan resolutions were mostly retained in the separate material fractions after basis material decomposition. In addition to decomposition of mineralized tissues (inherent sample contrast) and stained soft tissues, we present a case of double labelling of different soft tissues with subsequent material decomposition. We conclude that, in contrast to in vivo DECT examinations, small ex vivo specimens offer some clear advantages regarding technical parameters of the microCT setup and the use of contrast agents. These include a higher flexibility in source peak voltages and x-ray filters, a lower degree of beam hardening due to small sample size, the lack of restriction to nontoxic contrast agents and the lack of a limit on exposure time and radiation dose. We argue that microDECT, because of its flexibility combined with already established contrast agents and the vast number of currently unexploited stains, will in future represent an important technique for various applications in biological research. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
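    In its simplest two-material form, the post-reconstruction decomposition step described above reduces to solving a small linear system per voxel: the attenuation measured at the low and high tube voltages is modelled as a linear combination of the two basis materials' attenuations at those energies. The following numpy sketch is a generic illustration of that idea, not the authors' DECTDec MATLAB script, and the basis attenuation values are placeholders.

```python
# Generic two-material decomposition sketch (not DECTDec): each voxel's measured
# attenuation at low/high energy is modelled as a linear mix of two basis materials.
import numpy as np

# Placeholder basis attenuations: rows = [low-kV, high-kV], columns = [material 1, material 2].
A = np.array([[4.2, 1.0],
              [1.9, 0.8]])

def decompose(vol_low, vol_high):
    """Return per-voxel fractions of the two basis materials."""
    measurements = np.stack([vol_low.ravel(), vol_high.ravel()])   # shape (2, N)
    fractions, *_ = np.linalg.lstsq(A, measurements, rcond=None)   # shape (2, N)
    return fractions.reshape((2,) + vol_low.shape)

# Tiny synthetic example: a 2x2x2 volume scanned at two energies.
low = np.full((2, 2, 2), 3.0)
high = np.full((2, 2, 2), 1.5)
material_1, material_2 = decompose(low, high)
print(material_1[0, 0, 0], material_2[0, 0, 0])
```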

  10. Pre-analytic evaluation of volumetric absorptive microsampling and integration in a mass spectrometry-based metabolomics workflow.

    PubMed

    Volani, Chiara; Caprioli, Giulia; Calderisi, Giovanni; Sigurdsson, Baldur B; Rainer, Johannes; Gentilini, Ivo; Hicks, Andrew A; Pramstaller, Peter P; Weiss, Guenter; Smarason, Sigurdur V; Paglia, Giuseppe

    2017-10-01

    Volumetric absorptive microsampling (VAMS) is a novel approach that allows single-drop (10 μL) blood collection. Integration of VAMS with mass spectrometry (MS)-based untargeted metabolomics is an attractive solution for both human and animal studies. However, to boost the use of VAMS in metabolomics, key pre-analytical questions need to be addressed. Therefore, in this work, we integrated VAMS into an MS-based untargeted metabolomics workflow and investigated pre-analytical strategies such as sample extraction procedures and metabolome stability at different storage conditions. We first evaluated the best extraction procedure for the polar metabolome and found that the highest number and amount of metabolites were recovered upon extraction with acetonitrile/water (70:30). In contrast, basic conditions (pH 9) resulted in divergent metabolite profiles, mainly resulting from the extraction of intracellular metabolites originating from red blood cells. In addition, prolonged storage of blood samples at room temperature caused significant changes in metabolome composition, but once the VAMS devices were stored at −80 °C, the metabolome remained stable for up to 6 months. The time used for drying the sample also affected the metabolome. In fact, some metabolites were rapidly degraded or accumulated in the sample during the first 48 h at room temperature, indicating that a longer drying step significantly changes their concentrations in the sample. Graphical abstract Volumetric absorptive microsampling (VAMS) is a novel technology that allows single-drop blood collection and, in combination with mass spectrometry (MS)-based untargeted metabolomics, represents an attractive solution for both human and animal studies. In this work, we integrated VAMS into an MS-based untargeted metabolomics workflow and investigated pre-analytical strategies such as sample extraction procedures and metabolome stability at different storage conditions. The latter revealed that prolonged storage of blood samples at room temperature caused significant changes in metabolome composition, but if VAMS devices were stored at −80 °C, the metabolome remained stable for up to 6 months.

  11. Workflow of CAD / CAM Scoliosis Brace Adjustment in Preparation Using 3D Printing.

    PubMed

    Weiss, Hans-Rudolf; Tournavitis, Nicos; Nan, Xiaofeng; Borysov, Maksym; Paul, Lothar

    2017-01-01

    High-correction bracing is the most effective conservative treatment for patients with scoliosis during growth. Even today, braces for the treatment of scoliosis are made by casting patients, although computer-aided design (CAD) and computer-aided manufacturing (CAM) are available, with every possibility to standardize pattern-specific brace treatment and improve wearing comfort. CAD/CAM brace production mainly relies on carving a polyurethane foam model, which is the basis for vacuum-forming a polyethylene (PE) or polypropylene (PP) brace. The purpose of this short communication is to describe the workflow currently used and to outline future requirements with respect to 3D printing technology. This paper describes the steps of virtual brace adjustment as available today and outlines the great potential of future 3D printing technology. For 3D printing of scoliosis braces, it is necessary to establish easy-to-use software plug-ins in order to add 3D printing technology to the current workflow of virtual CAD/CAM brace adjustment. Textures and structures can be added to the brace models at certain well-defined locations, offering the potential of more wearing comfort without losing in-brace correction. Advances have to be made in the field of CAD/CAM software tools with respect to the design and generation of individually structured brace models based on currently well-established and standardized scoliosis brace libraries.

  12. CEREC CAD/CAM Chairside System

    PubMed Central

    SANNINO, G.; GERMANO, F.; ARCURI, L.; BIGELLI, E.; ARCURI, C.; BARLATTANI, A.

    2014-01-01

    SUMMARY Purpose. The aim of this paper was to describe the CEREC 3 chairside system, providing clinicians with a detailed analysis of the whole digital workflow. Benefits and limitations of this technology compared with the conventional prosthetic workflow were also highlighted and discussed. Materials and methods. Clinical procedures (tooth preparation, impression taking, adhesive luting), operational components and their capabilities, as well as the restorative materials used with the CEREC 3 chairside system, were reported. Results. The CEREC system has shown many positive aspects that make the prosthetic workflow easier, faster and less expensive. Operator-dependent errors are minimized compared to the conventional prosthetic protocol. Furthermore, patients showed a better acceptance level for the impression procedure. The only drawback concerned subgingival placement of the margins compared with supra/juxta gingival margins, since more time was required for impression taking as well as for the adhesive luting phase. The biocopy project seemed to be the best tool to obtain functionalized surfaces and keep gnathological data unchanged. Material selection was related to the type of restoration. Conclusions. The evidence from our clinical practice suggests that the CEREC 3 chairside system makes it possible to produce highly aesthetic and reliable restorations in a single visit, while minimizing costs and patient discomfort during prosthetic treatment. However, improvements in materials and technologies are needed in order to overcome the current drawbacks. PMID:25992260

  13. MONA – Interactive manipulation of molecule collections

    PubMed Central

    2013-01-01

    Working with small-molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico-chemical properties. Pipelining tools allow such workflows to be created and changed without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best-suited workflow is not known in advance, thus making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition-driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small-molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be used instantly without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case-to-case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni-hamburg.de/mona. PMID:23985157
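    MONA itself is an interactive GUI, but the kind of basic tasks it supports, such as duplicate removal and property-based filtering of a compound collection, can be illustrated with a short generic cheminformatics sketch. The example below uses RDKit; the SMILES strings and property cut-offs are arbitrary, and nothing here reflects MONA's internals.

```python
# Generic duplicate-removal and property-filtering sketch (not MONA's internals),
# using RDKit canonical SMILES for deduplication and simple physico-chemical cut-offs.
from rdkit import Chem
from rdkit.Chem import Descriptors

raw_smiles = ["CCO", "OCC", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # arbitrary examples

seen, unique = set(), []
for smi in raw_smiles:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        continue                       # skip unparsable entries
    canonical = Chem.MolToSmiles(mol)  # canonical form makes duplicates identical
    if canonical not in seen:
        seen.add(canonical)
        unique.append(mol)

# Keep only compounds within an (arbitrary) molecular-weight / logP window.
filtered = [m for m in unique
            if Descriptors.MolWt(m) <= 500 and Descriptors.MolLogP(m) <= 5]
print(len(unique), "unique,", len(filtered), "passing the filter")
```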

  14. Toward a geoinformatics framework for understanding the social and biophysical influences on urban nutrient pollution due to residential impervious surface connectivity

    NASA Astrophysics Data System (ADS)

    Miles, B.; Band, L. E.

    2012-12-01

    Water sustainability has been recognized as a fundamental problem of science whose solution relies in part on high-performance computing. Stormwater management is a major concern of urban sustainability. Understanding interactions between urban landcover and stormwater nutrient pollution requires consideration of fine-scale residential stormwater management, which in turn requires high-resolution LIDAR and landcover data not provided through national spatial data infrastructure, as well as field observation at the household scale. The objectives of my research are twofold: (1) advance understanding of the relationship between residential stormwater management practices and the export of nutrient pollution from stormwater in urbanized ecosystems; and (2) improve the informatics workflows used in community ecohydrology modeling as applied to heterogeneous urbanized ecosystems. In support of these objectives, I present preliminary results from initial work to: (1) develop an ecohydrology workflow platform that automates data preparation while maintaining data provenance and model metadata to yield reproducible workflows and support model benchmarking; (2) perform field observation of existing patterns of residential rooftop impervious surface connectivity to stormwater networks; and (3) develop Regional Hydro-Ecological Simulation System (RHESSys) models for watersheds in Baltimore, MD (as part of the Baltimore Ecosystem Study (BES) NSF Long-Term Ecological Research (LTER) site) and Durham, NC (as part of the NSF Urban Long-Term Research Area (ULTRA) program); these models will be used to simulate nitrogen loading resulting from both baseline residential rooftop impervious connectivity and for disconnection scenarios (e.g. roof drainage to lawn v. engineered rain garden, upslope v. riparian). This research builds on work done as part of the NSF EarthCube Layered Architecture Concept Award where a RHESSys workflow is being implemented in an iRODS (integrated Rule-Oriented Data System) environment. Modeling the ecohydrology of urban ecosystems in a reliable and reproducible manner requires a flexible scientific workflow platform that allows rapid prototyping with large-scale spatial datasets and model refinement integrating expert knowledge with local datasets and household surveys.

  15. Data Integration Tool: From Permafrost Data Translation Research Tool to A Robust Research Application

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.

    2016-12-01

    The United States National Science Foundation funded PermaData project, led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P), aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led first to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool on GitHub, packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
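    DIT's central idea, a user-ordered chain of small widgets such as read, multiply by a constant, sort and write, can be sketched generically in a few lines of Python. This is an illustration of the pattern only, not DIT's actual code, and the file names are hypothetical.

```python
# Generic sketch of a widget-style pipeline (an illustration of the pattern,
# not DIT's implementation): each widget is a small callable applied in order.
from functools import reduce

def read_csv_column(path):
    with open(path) as fh:
        return [float(line.strip()) for line in fh if line.strip()]

def multiply_by(constant):
    return lambda values: [v * constant for v in values]

def sort_values(values):
    return sorted(values)

def write_csv(path):
    def _write(values):
        with open(path, "w") as fh:
            fh.writelines(f"{v}\n" for v in values)
        return values
    return _write

def run_pipeline(data, widgets):
    # Apply each widget to the output of the previous one.
    return reduce(lambda acc, widget: widget(acc), widgets, data)

# Example: convert depths from cm to m, sort them, and write the translated file.
widgets = [multiply_by(0.01), sort_values, write_csv("translated.csv")]
# run_pipeline(read_csv_column("raw_site_data.csv"), widgets)  # hypothetical files
```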

  16. A comprehensive high-resolution mass spectrometry approach for characterization of metabolites by combination of ambient ionization, chromatography and imaging methods.

    PubMed

    Berisha, Arton; Dold, Sebastian; Guenther, Sabine; Desbenoit, Nicolas; Takats, Zoltan; Spengler, Bernhard; Römpp, Andreas

    2014-08-30

    An ideal method for bioanalytical applications would deliver spatially resolved quantitative information in real time and without sample preparation. In reality these requirements can typically not be met by a single analytical technique. Therefore, we combine different mass spectrometry approaches: chromatographic separation, ambient ionization and imaging techniques, in order to obtain comprehensive information about metabolites in complex biological samples. Samples were analyzed by laser desorption followed by electrospray ionization (LD-ESI) as an ambient ionization technique, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging for spatial distribution analysis and by high-performance liquid chromatography/electrospray ionization mass spectrometry (HPLC/ESI-MS) for quantitation and validation of compound identification. All MS data were acquired with high mass resolution and accurate mass (using orbital trapping and ion cyclotron resonance mass spectrometers). Grape berries were analyzed and evaluated in detail, whereas wheat seeds and mouse brain tissue were analyzed in proof-of-concept experiments. In situ measurements by LD-ESI without any sample preparation allowed for fast screening of plant metabolites on the grape surface. MALDI imaging of grape cross sections at 20 µm pixel size revealed the detailed distribution of metabolites which were in accordance with their biological function. HPLC/ESI-MS was used to quantify 13 anthocyanin species as well as to separate and identify isomeric compounds. A total of 41 metabolites (amino acids, carbohydrates, anthocyanins) were identified with all three approaches. Mass accuracy for all MS measurements was better than 2 ppm (root mean square error). The combined approach provides fast screening capabilities, spatial distribution information and the possibility to quantify metabolites. Accurate mass measurements proved to be critical in order to reliably combine data from different MS techniques. Initial results on the mycotoxin deoxynivalenol (DON) in wheat seed and phospholipids in mouse brain as a model for mammalian tissue indicate a broad applicability of the presented workflow. Copyright © 2014 John Wiley & Sons, Ltd.
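    The stated mass accuracy of better than 2 ppm (root mean square error) follows from the standard relative-error definition. A tiny sketch of the calculation, with made-up masses, is shown below.

```python
# Relative mass error in ppm and its root mean square over several measurements
# (the masses below are made-up values purely to illustrate the arithmetic).
import math

def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

pairs = [(449.10784, 449.10839), (611.16067, 611.16121)]  # (measured, theoretical)
errors = [ppm_error(m, t) for m, t in pairs]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print([round(e, 2) for e in errors], "RMS:", round(rmse, 2), "ppm")
```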

  17. Solid-phase extraction of the alcohol abuse biomarker phosphatidylethanol using newly synthesized polymeric sorbent materials containing quaternary heterocyclic groups.

    PubMed

    Duarte, Mariana; Jagadeesan, Kishore Kumar; Billing, Johan; Yilmaz, Ecevit; Laurell, Thomas; Ekström, Simon

    2017-10-13

    Phosphatidylethanol (PEth) is an interesting biomarker finding increased use for detecting long-term alcohol abuse with high specificity and sensitivity. Prior to detection, sample preparation is an unavoidable step in the workflow of PEth analysis, and new protocols may facilitate it. Solid-phase extraction (SPE) is a versatile sample preparation method widely used in biomedical laboratories due to its simplicity of use and the possibility of automation. In this work, SPE was used for the first time to directly extract PEth from spiked human plasma and spiked human blood. A library of polymeric SPE materials with different surface functionalities was screened for PEth extraction in order to identify the surface characteristics that control PEth retention and recovery. The plasma samples were diluted 1:10 (v/v) in water and spiked at different concentrations ranging from 0.3 to 5 μM. The library of SPE materials was then evaluated using the proposed SPE method, and detection was done by LC-MS/MS. One SPE material efficiently retained and recovered PEth from spiked human plasma. With this insight, four new SPE materials were formulated and synthesized based on the surface characteristics of the best SPE material found in the first screening. These new materials were tested with spiked human blood, to better mimic a real clinical sample. All the newly synthesized materials outperformed the pre-existing commercially available materials. Recovery values for the new SPE materials were found to be between 29.5% and 48.6% for the extraction of PEth from spiked blood. A material based on quaternized 1-vinylimidazole with a poly(trimethylolpropane trimethacrylate) backbone was found suitable for PEth extraction from spiked blood, showing the highest analyte recovery in this experiment, 48.6% ± 6.4%. Copyright © 2017 Elsevier B.V. All rights reserved.
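    Recovery figures such as 48.6% ± 6.4% are the ratio of the amount measured after extraction to the amount spiked, averaged over replicates. A minimal sketch of that bookkeeping, with invented replicate values, is shown below.

```python
# Minimal recovery calculation sketch (replicate values are invented for illustration):
# recovery = measured amount after SPE / spiked amount, summarised as mean +/- SD.
from statistics import mean, stdev

spiked = 1.0                                    # micromolar PEth spiked into the sample
measured_replicates = [0.52, 0.44, 0.49, 0.47]  # hypothetical post-extraction results

recoveries = [100 * m / spiked for m in measured_replicates]
print(f"{mean(recoveries):.1f}% +/- {stdev(recoveries):.1f}%")
```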

  18. Support and Development of Workflow Protocols for High Throughput Single-Lap-Joint Testing-Experimental

    DTIC Science & Technology

    2013-04-01

    … preparation, and presence of an overflow fillet for a high-strength epoxy and ductile methacrylate adhesive. A unique feature of this study was the use of untrained GEMS (Gains in the Education of Mathematics and Sci…) … of expanding adhesive joint test configurations as part of the GEMS program. Subject terms: single lap joint, adhesion, aluminum, epoxy.

  19. Optimization of subculture and DNA extraction steps within the whole genome sequencing workflow for source tracking of Salmonella enterica and Listeria monocytogenes.

    PubMed

    Gimonet, Johan; Portmann, Anne-Catherine; Fournier, Coralie; Baert, Leen

    2018-06-16

    This work shows that reducing the incubation time to 4-5 h when preparing a culture for DNA extraction, followed by automated DNA extraction, can shorten the hands-on time, reduce the turnaround time by 30% and increase throughput while maintaining WGS quality, as assessed by high-quality single nucleotide polymorphism analysis. Copyright © 2018. Published by Elsevier B.V.

  20. Successful Completion of FY18/Q1 ASC L2 Milestone 6355: Electrical Analysis Calibration Workflow Capability Demonstration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copps, Kevin D.

    The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts' use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today's SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to the electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.

  1. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
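    The parallelisation strategy in this study, chunking a large ligand library so that independent docking runs can be distributed, can be sketched generically as follows. The dock_one function is a placeholder standing in for a call to a sequential docking code; none of this represents the MoSGrid infrastructure itself.

```python
# Generic chunk-and-distribute sketch for virtual screening (a placeholder
# illustration, not the MoSGrid infrastructure): docking runs are independent,
# so a ligand library can be split into chunks and processed in parallel.
from concurrent.futures import ProcessPoolExecutor

def dock_one(ligand_id):
    # Placeholder for invoking a sequential docking code on one ligand.
    return ligand_id, 0.0  # (ligand, docking score)

def chunk(items, n_chunks):
    size = -(-len(items) // n_chunks)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

def dock_chunk(ligands):
    return [dock_one(lig) for lig in ligands]

if __name__ == "__main__":
    library = [f"ligand_{i}" for i in range(10_000)]   # hypothetical dataset
    with ProcessPoolExecutor(max_workers=8) as pool:   # stands in for DCI nodes
        results = [r for part in pool.map(dock_chunk, chunk(library, 8)) for r in part]
    print(len(results), "ligands docked")
```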

  2. Evaluation and Impact of Workflow Interruptions During Robot-assisted Surgery.

    PubMed

    Allers, Jenna C; Hussein, Ahmed A; Ahmad, Nabeeha; Cavuoto, Lora; Wing, Joseph F; Hayes, Robin M; Hinata, Nobuyuki; Bisantz, Ann M; Guru, Khurshid A

    2016-06-01

    To analyze and categorize causes for interruptions during robot-assisted surgery. We analyzed 10 robot-assisted prostatectomies that were performed by 3 surgeons from October 2014 to June 2015. Interruptions to surgery were defined in terms of duration, stage of surgery, personnel involved, reasons, and impact of the interruption on the surgical workflow. The main reasons for interruptions included the following: console surgeons switching (29%); preparation of the surgical equipment, such as cleaning or changing the camera (29%) or an instrument (27%); or when a suture, stapler, or clip was needed (12%). The most common interruption duration was 10-29 seconds (47.6%), and the least common interruption duration was greater than 90 seconds (3.6%). Additionally, about 14% of the interruptions were considered avoidable, whereas the remaining 86% of interruptions were necessary for surgery. By identifying and analyzing interruptions, we can develop evidence-based strategies to improve operating room efficiency, lower costs, and advance patient safety. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A Toolbox to Improve Algorithms for Insulin-Dosing Decision Support

    PubMed Central

    Donsa, K.; Plank, J.; Schaupp, L.; Mader, J. K.; Truskaller, T.; Tschapeller, B.; Höll, B.; Spat, S.; Pieber, T. R.

    2014-01-01

    Summary Background Standardized insulin order sets for subcutaneous basal-bolus insulin therapy are recommended by clinical guidelines for the inpatient management of diabetes. The algorithm based GlucoTab system electronically assists health care personnel by supporting clinical workflow and providing insulin-dose suggestions. Objective To develop a toolbox for improving clinical decision-support algorithms. Methods The toolbox has three main components. 1) Data preparation: Data from several heterogeneous sources is extracted, cleaned and stored in a uniform data format. 2) Simulation: The effects of algorithm modifications are estimated by simulating treatment workflows based on real data from clinical trials. 3) Analysis: Algorithm performance is measured, analyzed and simulated by using data from three clinical trials with a total of 166 patients. Results Use of the toolbox led to algorithm improvements as well as the detection of potential individualized subgroup-specific algorithms. Conclusion These results are a first step towards individualized algorithm modifications for specific patient subgroups. PMID:25024768

  4. [Implementation of modern operating room management -- experiences at a university hospital].

    PubMed

    Hensel, M; Wauer, H; Bloch, A; Volk, T; Kox, W J; Spies, C

    2005-07-01

    Caused by structural changes in health care, the general need for cost control is evident for all hospitals. As the operating room is one of the most cost-intensive sectors in a hospital, optimisation of workflow processes in this area is of particular interest for health care providers. While modern operating room management has already been established in several clinics, others are less prepared for economic challenges. Therefore, the operating room statute of the Charité university hospital, which may be useful for other hospitals developing their own concept, is presented. In addition, experiences made with the implementation of new management structures are described and results obtained over the last 5 years are reported. Whereas the total number of operating procedures increased by 15%, operating room utilization increased even more markedly in terms of both time and cases. Summarizing the results, central operating room management has proved to be an effective tool for increasing the efficiency of workflow processes in the operating room.

  5. Development of Chemical Isotope Labeling LC-MS for Milk Metabolomics: Comprehensive and Quantitative Profiling of the Amine/Phenol Submetabolome.

    PubMed

    Mung, Dorothea; Li, Liang

    2017-04-18

    Milk is a complex sample containing a variety of proteins, lipids, and metabolites. Studying the milk metabolome represents an important application of metabolomics in the general area of nutritional research. However, comprehensive and quantitative analysis of milk metabolites is a challenging task due to the wide range of variations in chemical/physical properties and concentrations of these metabolites. We report an analytical workflow for in-depth profiling of the milk metabolome based on chemical isotope labeling (CIL) and liquid chromatography mass spectrometry (LC-MS), with a focus on using dansylation labeling to target the amine/phenol submetabolome. An optimal sample preparation method, including the use of methanol at a 3:1 ratio of solvent to milk for protein precipitation and dichloromethane for lipid removal, was developed to detect and quantify as many metabolites as possible. This workflow was found to be generally applicable to profiling the milk metabolomes of different species (cow, goat, and human) and types. Results from experimental replicate analysis (n = 5) of 1:1, 2:1, and 1:2 12C-/13C-labeled cow milk samples showed that 95.7%, 94.3%, and 93.2% of peak pairs, respectively, had ratio values within the ±50% accuracy range, and 90.7%, 92.6%, and 90.8% of peak pairs had RSD values of less than 20%. In the metabolomic analysis of 36 samples from different categories of cow milk (brands, batches, and fat percentages) with experimental triplicates, a total of 7104 peak pairs or metabolites could be detected, with an average of 4573 ± 505 (n = 108) pairs detected per LC-MS run. Among them, 3820 peak pairs were commonly detected in over 80% of the samples, with 70 metabolites positively identified by mass and retention time matches to the dansyl standard library and 2988 pairs with their masses matched to the human metabolome libraries. This unprecedentedly high coverage of the amine/phenol submetabolome illustrates the complexity of the milk metabolome. Since milk and milk products are consumed in large quantities on a daily basis, the intake of these milk metabolites even at low concentrations can be cumulatively high. The high-coverage analysis of the milk metabolome using CIL LC-MS should be very useful in future research involving the study of the effects of these metabolites on human health. It should also be useful in the dairy industry in areas such as improving milk production, developing new processing technologies, developing improved nutritional products, quality control, and milk product authentication.
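    The quality metrics quoted above (the fraction of 12C/13C peak pairs whose mean ratio lies within ±50% of the expected value, and the fraction with replicate RSD below 20%) amount to simple bookkeeping over the measured ratios. The sketch below illustrates that bookkeeping with invented numbers.

```python
# Sketch of the peak-pair quality metrics described above, using invented ratios:
# fraction of pairs within +/-50% of the expected 12C/13C ratio, and fraction of
# pairs whose replicate RSD is below 20%.
from statistics import mean, stdev

expected = 1.0                      # e.g. a 1:1 12C/13C mixture
replicate_ratios = {                # hypothetical peak pairs -> replicate ratios
    "pair_1": [0.95, 1.05, 1.02],
    "pair_2": [1.60, 1.55, 1.62],   # mean clearly off the expected ratio
    "pair_3": [0.98, 0.70, 1.30],   # mean acceptable but replicates noisy
}

within_accuracy = sum(
    1 for r in replicate_ratios.values()
    if abs(mean(r) - expected) / expected <= 0.5
)
low_rsd = sum(
    1 for r in replicate_ratios.values()
    if stdev(r) / mean(r) * 100 < 20
)
n = len(replicate_ratios)
print(f"{100 * within_accuracy / n:.0f}% within +/-50%, "
      f"{100 * low_rsd / n:.0f}% with RSD < 20%")
```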

  6. An integrated workflow for stress and flow modelling using outcrop-derived discrete fracture networks

    NASA Astrophysics Data System (ADS)

    Bisdom, K.; Nick, H. M.; Bertotti, G.

    2017-06-01

    Fluid flow in naturally fractured reservoirs is often controlled by subseismic-scale fracture networks. Although the fracture network can be partly sampled in the direct vicinity of wells, the inter-well scale network is poorly constrained in fractured reservoir models. Outcrop analogues can provide data for populating domains of the reservoir model where no direct measurements are available. However, extracting relevant statistics from large outcrops representative of inter-well scale fracture networks remains challenging. Recent advances in outcrop imaging provide high-resolution datasets that can cover areas of several hundred by several hundred meters, i.e. the domain between adjacent wells, but even then, data from the high-resolution models are often upscaled to reservoir flow grids, resulting in loss of accuracy. We present a workflow that uses photorealistic georeferenced outcrop models to construct geomechanical and fluid flow models containing thousands of discrete fractures covering sufficiently large areas, and that does not require upscaling to model permeability. This workflow seamlessly integrates geomechanical Finite Element models with flow models that take into account stress-sensitive fracture permeability and matrix flow to determine the full permeability tensor. The applicability of this workflow is illustrated using an outcropping carbonate pavement in the Potiguar basin in Brazil, from which 1082 fractures are digitised. The permeability tensor for a range of matrix permeabilities shows that conventional upscaling to effective grid properties leads to potential underestimation of the true permeability and the orientation of principal permeabilities. The presented workflow yields the full permeability tensor of discrete fracture networks with stress-induced apertures, instead of relying on effective properties as most conventional flow models do.
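    For readers unfamiliar with fracture-network upscaling, a much-reduced stand-in for the full workflow described above is an Oda-type summation: each fracture contributes a cubic-law transmissivity term to the area-averaged permeability tensor. The sketch below makes that simplification explicit (parallel-plate apertures, no matrix flow, no stress sensitivity) and uses an invented fracture set; it is not the authors' coupled geomechanical-flow model.

```python
# Simplified Oda-type upscaling sketch (parallel-plate, cubic-law fractures; no matrix
# flow or stress sensitivity): sums each fracture's contribution to the 2D permeability
# tensor of a sampling area. A stand-in illustration, not the authors' workflow.
import numpy as np

def permeability_tensor(fractures, area):
    """fractures: iterable of (length_m, aperture_m, strike_radians)."""
    k = np.zeros((2, 2))
    identity = np.eye(2)
    for length, aperture, strike in fractures:
        normal = np.array([-np.sin(strike), np.cos(strike)])  # unit normal to the trace
        # Cubic-law transmissivity b^3/12 spread along the fracture direction.
        k += (aperture ** 3 * length / 12.0) * (identity - np.outer(normal, normal))
    return k / area

# Hypothetical digitised fracture set in a 100 m x 100 m pavement window.
fractures = [(35.0, 2e-4, np.radians(10)),
             (50.0, 3e-4, np.radians(95)),
             (20.0, 1e-4, np.radians(60))]
k = permeability_tensor(fractures, area=100.0 * 100.0)
principal, _ = np.linalg.eigh(k)
print("k (m^2):\n", k, "\nprincipal permeabilities:", principal)
```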

  7. Diagnostic procedures for non-small-cell lung cancer (NSCLC): recommendations of the European Expert Group

    PubMed Central

    Dietel, Manfred; Bubendorf, Lukas; Dingemans, Anne-Marie C; Dooms, Christophe; Elmberger, Göran; García, Rosa Calero; Kerr, Keith M; Lim, Eric; López-Ríos, Fernando; Thunnissen, Erik; Van Schil, Paul E; von Laffert, Maximilian

    2016-01-01

    Background There is currently no Europe-wide consensus on the appropriate preanalytical measures and workflow to optimise procedures for tissue-based molecular testing of non-small-cell lung cancer (NSCLC). To address this, a group of lung cancer experts (see list of authors) convened to discuss and propose standard operating procedures (SOPs) for NSCLC. Methods Based on earlier meetings and scientific expertise on lung cancer, a multidisciplinary group meeting was aligned. The aim was to include all relevant aspects concerning NSCLC diagnosis. After careful consideration, the following topics were selected and each was reviewed by the experts: surgical resection and sampling; biopsy procedures for analysis; preanalytical and other variables affecting quality of tissue; tissue conservation; testing procedures for epidermal growth factor receptor, anaplastic lymphoma kinase and ROS proto-oncogene 1, receptor tyrosine kinase (ROS1) in lung tissue and cytological specimens; as well as standardised reporting and quality control (QC). Finally, an optimal workflow was described. Results Suggested optimal procedures and workflows are discussed in detail. The broad consensus was that the complex workflow presented can only be executed effectively by an interdisciplinary approach using a well-trained team. Conclusions To optimise diagnosis and treatment of patients with NSCLC, it is essential to establish SOPs that are adaptable to the local situation. In addition, a continuous QC system and a local multidisciplinary tumour-type-oriented board are essential. PMID:26530085

  8. Quality Control of Structural MRI Images Applied Using FreeSurfer—A Hands-On Workflow to Rate Motion Artifacts

    PubMed Central

    Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.

    2016-01-01

    In structural magnetic resonance imaging motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., determine participants with most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528

  9. [Measures to prevent patient identification errors in blood collection/physiological function testing utilizing a laboratory information system].

    PubMed

    Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji

    2013-08-01

    We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system not only handles test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance, and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been eliminated in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major cause of the errors was an excessively complicated identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are used during the entire test process, i.e. the ultrasound modality system, the laboratory information system and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but this is not a simple task and has not been completed yet. Utilization of the laboratory information system is effective, but is not yet perfect for patient identification. Even today, the most fundamental procedure for patient identification is to ask the person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.

  10. From Field to the Web: Management and Publication of Geoscience Samples in CSIRO Mineral Resources

    NASA Astrophysics Data System (ADS)

    Devaraju, A.; Klump, J. F.; Tey, V.; Fraser, R.; Reid, N.; Brown, A.; Golodoniuc, P.

    2016-12-01

    Inaccessible samples are an obstacle to the reproducibility of research and may cause waste of time and resources through duplication of sample collection and management. Within the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Mineral Resources there are various research communities who collect or generate physical samples as part of their field studies and analytical processes. Materials can be varied and could be rock, soil, plant materials, water, and even synthetic materials. Given the wide range of applications in CSIRO, each researcher or project may follow their own method of collecting, curating and documenting samples. In many cases samples and their documentation are often only available to the sample collector. For example, the Australian Resources Research Centre stores rock samples and research collections dating as far back as the 1970s. Collecting these samples again would be prohibitively expensive and in some cases impossible because the site has been mined out. These samples would not be easily discoverable by others without an online sample catalog. We identify some of the organizational and technical challenges to provide unambiguous and systematic access to geoscience samples, and present their solutions (e.g., workflow, persistent identifier and tools). We present the workflow starting from field sampling to sample publication on the Web, and describe how the International Geo Sample Number (IGSN) can be applied to identify samples along the process. In our test case geoscientific samples are collected as part of the Capricorn Distal Footprints project, a collaboration project between the CSIRO, the Geological Survey of Western Australia, academic institutions and industry partners. We conclude by summarizing the values of our solutions in terms of sample management and publication.

  11. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirement of a complex Earth observation scenario. A RESTFul based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTFul based workflows interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.

  12. Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludascher, Bertram; Altintas, Ilkay

    Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state-of-the-art in scientific workflows have focused on the following areas. Progress in each of these areas is described in subsequent sections. Workflow development: the development of a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows. Generic workflow components and templates: the development of generic actors (i.e., workflow components and processes) which can be broadly applied to scientific problems. Provenance collection and analysis: the design of a flexible provenance collection and analysis infrastructure within the workflow environment. Workflow reliability and fault tolerance: the improvement of the reliability and fault-tolerance of workflow environments.

  13. Recent development in software and automation tools for high-throughput discovery bioanalysis.

    PubMed

    Shou, Wilson Z; Zhang, Jun

    2012-05-01

    Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. The LC-MS/MS bioanalytical support for drug discovery, especially for early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement depending on the particular experiments being conducted at this stage, and it is usually not as stringent as those required in bioanalysis supporting drug development. These aforementioned attributes of HT discovery bioanalysis made it an ideal candidate for using software and automation tools to eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we will review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate MS bioanalytical approach are also discussed.

  14. Multiplex PCR method for MinION and Illumina sequencing of Zika and other virus genomes directly from clinical samples.

    PubMed

    Quick, Joshua; Grubaugh, Nathan D; Pullan, Steven T; Claro, Ingra M; Smith, Andrew D; Gangavarapu, Karthik; Oliveira, Glenn; Robles-Sikisaka, Refugio; Rogers, Thomas F; Beutler, Nathan A; Burton, Dennis R; Lewis-Ximenez, Lia Laura; de Jesus, Jaqueline Goes; Giovanetti, Marta; Hill, Sarah C; Black, Allison; Bedford, Trevor; Carroll, Miles W; Nunes, Marcio; Alcantara, Luiz Carlos; Sabino, Ester C; Baylis, Sally A; Faria, Nuno R; Loose, Matthew; Simpson, Jared T; Pybus, Oliver G; Andersen, Kristian G; Loman, Nicholas J

    2017-06-01

    Genome sequencing has become a powerful tool for studying emerging infectious diseases; however, genome sequencing directly from clinical samples (i.e., without isolation and culture) remains challenging for viruses such as Zika, for which metagenomic sequencing methods may generate insufficient numbers of viral reads. Here we present a protocol for generating coding-sequence-complete genomes, comprising an online primer design tool, a novel multiplex PCR enrichment protocol, optimized library preparation methods for the portable MinION sequencer (Oxford Nanopore Technologies) and the Illumina range of instruments, and a bioinformatics pipeline for generating consensus sequences. The MinION protocol does not require an Internet connection for analysis, making it suitable for field applications with limited connectivity. Our method relies on multiplex PCR for targeted enrichment of viral genomes from samples containing as few as 50 genome copies per reaction. Viral consensus sequences can be achieved in 1-2 d by starting with clinical samples and following a simple laboratory workflow. This method has been successfully used by several groups studying Zika virus evolution and is facilitating an understanding of the spread of the virus in the Americas. The protocol can be used to sequence other viral genomes using the online Primal Scheme primer designer software. It is suitable for sequencing either RNA or DNA viruses in the field during outbreaks or as an inexpensive, convenient method for use in the lab.
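    The multiplex PCR approach relies on tiling the target genome with overlapping amplicons whose primers are split into two pools so that neighbouring amplicons do not interfere. As a rough illustration of the tiling arithmetic only, and not of the Primal Scheme algorithm itself, the sketch below lays out amplicon windows for an assumed genome length, amplicon size and overlap.

```python
# Rough illustration of amplicon tiling arithmetic (not the Primal Scheme algorithm):
# lay out overlapping amplicon windows across a genome and alternate them between
# two primer pools so adjacent amplicons are amplified in separate reactions.
def tile_genome(genome_length, amplicon_length=400, overlap=75):
    amplicons, start, pool = [], 0, 1
    while start + amplicon_length < genome_length:
        amplicons.append((start, start + amplicon_length, pool))
        start += amplicon_length - overlap
        pool = 2 if pool == 1 else 1
    # Anchor the final amplicon at the genome end so coverage is complete.
    amplicons.append((genome_length - amplicon_length, genome_length, pool))
    return amplicons

# Assumed values loosely in the range used for small RNA virus genomes.
scheme = tile_genome(10_800)
for left, right, pool in scheme[:4]:
    print(f"amplicon {left}-{right} in pool {pool}")
print("total amplicons:", len(scheme))
```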

  15. Development of an online SPE-UHPLC-MS/MS method for the multiresidue analysis of the 17 compounds from the EU "Watch list".

    PubMed

    Gusmaroli, Lucia; Insa, Sara; Petrovic, Mira

    2018-04-24

    During the last decades, the quality of aquatic ecosystems has been threatened by increasing levels of pollution, caused by the discharge of man-made chemicals both via the accidental release of pollutants and as a consequence of the constant outflow of inadequately treated wastewater effluents. For this reason, the European Union is updating its legislation with the aim of limiting the release of emerging contaminants. The Commission Implementing Decision (EU) 2015/495, published in March 2015, drafts a "Watch list" of compounds to be monitored Europe-wide. In this study, a methodology based on online solid-phase extraction (SPE) ultra-high-performance liquid chromatography coupled to a triple-quadrupole mass spectrometer (UHPLC-MS/MS) was developed for the simultaneous determination of the 17 compounds listed therein. The proposed method offers advantages over already available methods, such as versatility (all 17 compounds can be analyzed simultaneously), shorter analysis time, robustness, and sensitivity. The employment of online sample preparation minimized sample manipulation and dramatically reduced the sample volume and time required, making the analysis fast and reliable. The method was successfully validated in surface water and in influent and effluent wastewater. Limits of detection ranged from sub- to low-nanogram per liter levels, in compliance with the EU limits, with the only exception of EE2. Graphical abstract Schematic of the workflow for the analysis of the Watch list compounds.

  16. Development of an analytical method to assess the occupational health risk of therapeutic monoclonal antibodies using LC-HRMS.

    PubMed

    Reinders, Lars M H; Klassen, Martin D; Jaeger, Martin; Teutenberg, Thorsten; Tuerk, Jochen

    2018-04-01

    Monoclonal antibodies are a group of commonly used therapeutics whose occupational health risk is still discussed controversially. The side effects of long-term low-dose exposure are insufficiently evaluated; hence, discussions often remain theoretical or extrapolate side effects from therapeutic dosages. While some research groups recommend applying the precautionary principle to monoclonal antibodies, others consider the exposure risk too low to justify occupational health and safety measures. However, both groups agree that airborne monoclonal antibodies carry the greatest risk potential. Therefore, we developed a peptide-based analytical method for occupational exposure monitoring of airborne monoclonal antibodies. The method will allow data to be collected on occupational exposure to monoclonal antibodies. Thus, the mean daily intake for personnel in pharmacies and the pharmaceutical industry can be determined for the first time, which will help to substantiate the risk assessment with relevant data. The introduced monitoring method includes air sampling, sample preparation and detection of individual monoclonal antibodies, as well as a sum parameter, by liquid chromatography coupled with high-resolution mass spectrometry. For method development and validation, a chimeric (rituximab), a humanised (trastuzumab) and a fully human (daratumumab) monoclonal antibody were used. A limit of detection between 1 μg per sample for daratumumab and 25 μg per sample for the collective peptide is achieved. Graphical abstract Demonstration of the analytical workflow, from the release of monoclonal antibodies to their detection as single substances as well as a sum parameter.

  17. Peregrine: A rapid and unbiased method to produce strand-specific RNA-Seq libraries from small quantities of starting material.

    PubMed

    Langevin, Stanley A; Bent, Zachary W; Solberg, Owen D; Curtis, Deanna J; Lane, Pamela D; Williams, Kelly P; Schoeniger, Joseph S; Sinha, Anupama; Lane, Todd W; Branda, Steven S

    2013-04-01

    Use of second generation sequencing (SGS) technologies for transcriptional profiling (RNA-Seq) has revolutionized transcriptomics, enabling measurement of RNA abundances with unprecedented specificity and sensitivity and the discovery of novel RNA species. Preparation of RNA-Seq libraries requires conversion of the RNA starting material into cDNA flanked by platform-specific adaptor sequences. Each of the published methods and commercial kits currently available for RNA-Seq library preparation suffers from at least one major drawback, including long processing times, large starting material requirements, uneven coverage, loss of strand information and high cost. We report the development of a new RNA-Seq library preparation technique that produces representative, strand-specific RNA-Seq libraries from small amounts of starting material in a fast, simple and cost-effective manner. Additionally, we have developed a new quantitative PCR-based assay for precisely determining the number of PCR cycles to perform for optimal enrichment of the final library, a key step in all SGS library preparation workflows.
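
    The abstract does not detail the qPCR-based cycle-determination assay, but the general idea can be sketched: run qPCR on a small aliquot of the library and pick the cycle at which the signal reaches a set fraction of its plateau, so that the preparative PCR stays in the exponential phase. The function name, threshold fraction and fluorescence values below are illustrative assumptions, not the published protocol.

```python
# Illustrative sketch (not the authors' assay): choose the number of PCR cycles
# for final library enrichment from a qPCR run on a small aliquot, by taking the
# cycle at which fluorescence first reaches a chosen fraction of its plateau.

def optimal_cycles(fluorescence, plateau_fraction=0.25):
    """fluorescence: list of background-subtracted qPCR readings, one per cycle
    (cycle 1 first). Returns the 1-based cycle number to use for the
    preparative PCR; the plateau fraction is an assumed tuning knob."""
    plateau = max(fluorescence)
    threshold = plateau_fraction * plateau
    for cycle, signal in enumerate(fluorescence, start=1):
        if signal >= threshold:
            return cycle
    return len(fluorescence)  # threshold never reached: use all recorded cycles


if __name__ == "__main__":
    # Hypothetical sigmoid-like amplification curve for a low-input library.
    readings = [0.01, 0.01, 0.02, 0.03, 0.06, 0.11, 0.22, 0.41, 0.70,
                1.05, 1.38, 1.62, 1.78, 1.87, 1.92, 1.95, 1.96, 1.97]
    print("Enrich final library with", optimal_cycles(readings), "PCR cycles")
```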

  18. Guest Editor's introduction

    NASA Astrophysics Data System (ADS)

    Chrysanthis, Panos K.

    1996-12-01

    Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, USA This special issue focuses on current efforts to represent and support workflows that integrate information systems and human resources within a business or manufacturing enterprise. Workflows may also be viewed as an emerging computational paradigm for effective structuring of cooperative applications involving human users and access to diverse data types not necessarily maintained by traditional database management systems. A workflow is an automated organizational process (also called business process) which consists of a set of activities or tasks that need to be executed in a particular controlled order over a combination of heterogeneous database systems and legacy systems. Within workflows, tasks are performed cooperatively by either human or computational agents in accordance with their roles in the organizational hierarchy. The challenge in facilitating the implementation of workflows lies in developing efficient workflow management systems. A workflow management system (also called workflow server, workflow engine or workflow enactment system) provides the necessary interfaces for coordination and communication among human and computational agents to execute the tasks involved in a workflow and controls the execution orderings of tasks as well as the flow of data that these tasks manipulate. That is, the workflow management system is responsible for correctly and reliably supporting the specification, execution, and monitoring of workflows. The six papers selected (out of the twenty-seven submitted for this special issue of Distributed Systems Engineering) address different aspects of these three functional components of a workflow management system. In the first paper, `Correctness issues in workflow management', Kamath and Ramamritham discuss the important issue of correctness in workflow management that constitutes a prerequisite for the use of workflows in the automation of the critical organizational/business processes. In particular, this paper examines the issues of execution atomicity and failure atomicity, differentiating between correctness requirements of system failures and logical failures, and surveys techniques that can be used to ensure data consistency in workflow management systems. While the first paper is concerned with correctness assuming transactional workflows in which selective transactional properties are associated with individual tasks or the entire workflow, the second paper, `Scheduling workflows by enforcing intertask dependencies' by Attie et al, assumes that the tasks can be either transactions or other activities involving legacy systems. This second paper describes the modelling and specification of conditions involving events and dependencies among tasks within a workflow using temporal logic and finite state automata. It also presents a scheduling algorithm that enforces all stated dependencies by executing at any given time only those events that are allowed by all the dependency automata and in an order as specified by the dependencies. In any system with decentralized control, there is a need to effectively cope with the tension that exists between autonomy and consistency requirements. In `A three-level atomicity model for decentralized workflow management systems', Ben-Shaul and Heineman focus on the specific requirement of enforcing failure atomicity in decentralized, autonomous and interacting workflow management systems. 
Their paper describes a model in which each workflow manager must be able to specify the sequence of tasks that comprise an atomic unit for the purposes of correctness, and the degrees of local and global atomicity for the purpose of cooperation with other workflow managers. The paper also discusses a realization of this model in which treaties and summits provide an agreement mechanism, while underlying transaction managers are responsible for maintaining failure atomicity. The fourth and fifth papers are experience papers describing a workflow management system and a large scale workflow application, respectively. Schill and Mittasch, in `Workflow management systems on top of OSF DCE and OMG CORBA', describe a decentralized workflow management system and discuss its implementation using two standardized middleware platforms, namely, OSF DCE and OMG CORBA. The system supports a new approach to workflow management, introducing several new concepts such as data type management for integrating various types of data and quality of service for various services provided by servers. A problem common to both database applications and workflows is the handling of missing and incomplete information. This is particularly pervasive in an `electronic market' with a huge number of retail outlets producing and exchanging volumes of data, the application discussed in `Information flow in the DAMA project beyond database managers: information flow managers'. Motivated by the need for a method that allows a task to proceed in a timely manner if not all data produced by other tasks are available by its deadline, Russell et al propose an architectural framework and a language that can be used to detect, approximate and, later on, to adjust missing data if necessary. The final paper, `The evolution towards flexible workflow systems' by Nutt, is complementary to the other papers and is a survey of issues and of work related to both workflow and computer supported collaborative work (CSCW) areas. In particular, the paper provides a model and a categorization of the dimensions which workflow management and CSCW systems share. Besides summarizing the recent advancements towards efficient workflow management, the papers in this special issue suggest areas open to investigation and it is our hope that they will also provide the stimulus for further research and development in the area of workflow management systems.

  19. Transparent DNA/RNA Co-extraction Workflow Protocol Suitable for Inhibitor-Rich Environmental Samples That Focuses on Complete DNA Removal for Transcriptomic Analyses

    PubMed Central

    Lim, Natalie Y. N.; Roco, Constance A.; Frostegård, Åsa

    2016-01-01

    Adequate comparisons of DNA and cDNA libraries from complex environments require methods for co-extraction of DNA and RNA due to the inherent heterogeneity of such samples, or risk bias caused by variations in lysis and extraction efficiencies. Still, there are few methods and kits allowing simultaneous extraction of DNA and RNA from the same sample, and the existing ones generally require optimization. The proprietary nature of kit components, however, makes modifications of individual steps in the manufacturer’s recommended procedure difficult. Surprisingly, enzymatic treatments are often performed before purification procedures are complete, which we have identified here as a major problem when seeking efficient genomic DNA removal from RNA extracts. Here, we tested several DNA/RNA co-extraction commercial kits on inhibitor-rich soils, and compared them to a commonly used phenol-chloroform co-extraction method. Since none of the kits/methods co-extracted high-quality nucleic acid material, we optimized the extraction workflow by introducing small but important improvements. In particular, we illustrate the need for extensive purification prior to all enzymatic procedures, with special focus on the DNase digestion step in RNA extraction. These adjustments led to the removal of enzymatic inhibition in RNA extracts and made it possible to reduce genomic DNA to below detectable levels as determined by quantitative PCR. Notably, we confirmed that DNase digestion may not be uniform in replicate extraction reactions, thus the analysis of “representative samples” is insufficient. The modular nature of our workflow protocol allows optimization of individual steps. It also increases focus on additional purification procedures prior to enzymatic processes, in particular DNases, yielding genomic DNA-free RNA extracts suitable for metatranscriptomic analysis. PMID:27803690

  20. Classical workflow nets and workflow nets with reset arcs: using Lyapunov stability for soundness verification

    NASA Astrophysics Data System (ADS)

    Clempner, Julio B.

    2017-01-01

    This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known Lyapunov stability results for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide several results related to properties such as boundedness, liveness, reversibility and blocking, again using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.
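
    For readers unfamiliar with the property being verified, the classical (van der Aalst) soundness conditions for a workflow net N = (P, T, F) with source place i and sink place o are restated below as background; this is the standard definition, not the Lyapunov-based formulation developed in the paper.

```latex
% Classical soundness of a workflow net N = (P, T, F) with source place i and
% sink place o. [i] and [o] denote the markings with a single token in i and o;
% \xrightarrow{*} denotes reachability and \xrightarrow{t} the firing of t.
\begin{align*}
&\text{(option to complete)}  && \forall M:\; \bigl([i] \xrightarrow{*} M\bigr) \Rightarrow \bigl(M \xrightarrow{*} [o]\bigr),\\
&\text{(proper completion)}   && \forall M:\; \bigl([i] \xrightarrow{*} M \wedge M \geq [o]\bigr) \Rightarrow \bigl(M = [o]\bigr),\\
&\text{(no dead transitions)} && \forall t \in T\;\exists M, M':\; [i] \xrightarrow{*} M \xrightarrow{t} M'.
\end{align*}
```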

  1. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data - not to replace it. The workflow has three components:
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing where the initial part is, as usual, segmentation depending on the actual data product. Then comes identification of blobs, calculation of principal axes of blobs, symmetry operations and projection on a three parameter egg shape space.
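
    As a rough illustration of the computerised image-processing step (segmentation, blob identification, principal axes), the sketch below uses generic scikit-image routines on a synthetic frame; the threshold choice and size filter are assumptions, and the code is not the authors' software.

```python
# Minimal sketch of segmentation -> blob identification -> principal axes,
# using generic scikit-image routines on a synthetic image.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops


def candidate_spores(gray_image, min_area=50):
    """Return per-blob shape descriptors from a grayscale microscope image."""
    mask = gray_image > threshold_otsu(gray_image)   # global segmentation
    blobs = label(mask)                              # connected components
    features = []
    for region in regionprops(blobs):
        if region.area < min_area:                   # drop speckle noise
            continue
        features.append({
            "centroid": region.centroid,
            "area": region.area,
            # principal axes of the blob, from its second-order image moments
            "major_axis": region.major_axis_length,
            "minor_axis": region.minor_axis_length,
            "orientation": region.orientation,
            "eccentricity": region.eccentricity,
        })
    return features


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.normal(0.1, 0.02, (256, 256))
    synthetic[100:130, 60:80] = 0.9                  # one bright blob
    for blob in candidate_spores(synthetic):
        print(blob["area"], round(blob["eccentricity"], 2))
```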

  2. Biowep: a workflow enactment portal for bioinformatics applications.

    PubMed

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools makes the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable to the majority of unskilled researchers. A portal enabling these to take profit from new technologies is still missing. We designed biowep, a web based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. Main workflows' processing steps are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports users authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that support the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of effective workflows can significantly improve automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is further being developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.

  3. Biowep: a workflow enactment portal for bioinformatics applications

    PubMed Central

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-01-01

    Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools makes the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable to the majority of unskilled researchers. A portal enabling these to take profit from new technologies is still missing. Results We designed biowep, a web based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. Main workflows' processing steps are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports users authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that support the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of effective workflows can significantly improve automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is further being developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics – LITBIO. PMID:17430563

  4. Exploring the impact of an automated prescription-filling device on community pharmacy technician workflow.

    PubMed

    Walsh, Kristin E; Chui, Michelle Anne; Kieser, Mara A; Williams, Staci M; Sutter, Susan L; Sutter, John G

    2011-01-01

    To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign.

  5. Ca analysis: An Excel based program for the analysis of intracellular calcium transients including multiple, simultaneous regression analysis☆

    PubMed Central

    Greensmith, David J.

    2014-01-01

    Here I present an Excel based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps which convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes. (1) It can prepare the raw signal by several methods. (2) It can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. Also, the rates of change of Ca can be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs equally well as commercially available software, but has numerous advantages, namely creating a simplified, self-contained analysis workflow. PMID:24125908
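
    The abstract does not reproduce the calibration equations the program implements; the sketch below assumes the standard single-wavelength calibration [Ca2+] = Kd·(F − Fmin)/(Fmax − F) and a least-squares slope for the rate of change. The numbers and function names are illustrative, and this is not the Excel program itself.

```python
# Illustrative sketch: convert a background-subtracted fluorescence trace to
# [Ca2+] with the standard single-wavelength calibration equation, then
# estimate a rate of change by linear regression.
import numpy as np


def fluorescence_to_ca(f, f_min, f_max, kd_nM):
    """[Ca2+] = Kd * (F - Fmin) / (Fmax - F); inputs are assumed to be
    background-subtracted, Kd is the indicator's in-situ dissociation constant."""
    f = np.asarray(f, dtype=float)
    return kd_nM * (f - f_min) / (f_max - f)


def rate_of_change(time_s, ca_nM):
    """Least-squares slope (nM/s) over the selected window."""
    slope, _intercept = np.polyfit(time_s, ca_nM, 1)
    return slope


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 11)                 # s
    trace = 1.0 + 0.8 * t                         # arbitrary rising F values
    ca = fluorescence_to_ca(trace, f_min=0.5, f_max=3.0, kd_nM=400.0)
    print("rate of change:", round(rate_of_change(t, ca), 1), "nM/s")
```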

  6. Cerec omnicam and the virtual articulator--a case report.

    PubMed

    Fritzsche, G

    2013-01-01

    This case report demonstrates how two opposing teeth were restored with full crowns using Cerec software version 4.2 (pre-release version). In addition, an anterior tooth was provided with a veneer. The situation was scanned with the Cerec Omnicam. The new virtual articulator was used for the design to obtain correct dynamic contacts. The Cerec Omnicam can scan the entire situation prior to preparation without the help of an assistant, as no surface pretreatment is necessary. The locations of the occlusal contacts can be marked with articulating paper and are indicated on the virtual models. Selective deletion of individual areas allows the prepared teeth to be rescanned, considerably speeding up the workflow. A video demonstration is available of the acquisition and design procedure.

  7. MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.

    PubMed

    Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y

    2018-01-02

    Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer .

  8. Workflows and individual differences during visually guided routine tasks in a road traffic management control room.

    PubMed

    Starke, Sandra D; Baber, Chris; Cooke, Neil J; Howes, Andrew

    2017-05-01

    Road traffic control rooms rely on human operators to monitor and interact with information presented on multiple displays. Past studies have found inconsistent use of available visual information sources in such settings across different domains. In this study, we aimed to broaden the understanding of observer behaviour in control rooms by analysing a case study in road traffic control. We conducted a field study in a live road traffic control room where five operators responded to incidents while wearing a mobile eye tracker. Using qualitative and quantitative approaches, we investigated the operators' workflow using ergonomics methods and quantified visual information sampling. We found that individuals showed differing preferences for viewing modalities and weighting of task components, with a strong coupling between eye and head movement. For the quantitative analysis of the eye tracking data, we propose a number of metrics which may prove useful to compare visual sampling behaviour across domains in future. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Generic worklist handler for workflow-enabled products

    NASA Astrophysics Data System (ADS)

    Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas

    1999-07-01

    Workflow management (WfM) is an emerging field of medical information technology. It appears as a promising key technology to model, optimize and automate processes, for the sake of improved efficiency, reduced costs and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic worklist handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded worklist handlers will be called 'Workflow Enabled Application Systems'. In this paper we discuss functional requirements of worklist handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management and - later in the paper - the available standards as defined by the Workflow Management Coalition are briefly reviewed.
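
    To make the role of a generic worklist handler concrete, the sketch below models the minimal interactions it mediates between an enactment service and a workflow-enabled application (offer, list, check out, complete). The class and method names are assumptions chosen for illustration; they are not the WfMC interface definitions or the specification proposed in the paper.

```python
# Illustrative sketch of a worklist handler sitting between a workflow
# enactment service and an application system; names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class WorkItem:
    item_id: str
    activity: str           # task in the workflow definition
    performer_role: str     # organizational role allowed to execute it
    state: str = "offered"  # offered -> allocated -> completed


@dataclass
class WorklistHandler:
    """Mediates between the enactment service (which assigns work items)
    and the workflow-enabled application (which executes them)."""
    items: Dict[str, WorkItem] = field(default_factory=dict)

    def offer(self, item: WorkItem) -> None:          # called by enactment service
        self.items[item.item_id] = item

    def worklist(self, role: str) -> List[WorkItem]:  # called by the application
        return [i for i in self.items.values()
                if i.performer_role == role and i.state == "offered"]

    def checkout(self, item_id: str) -> WorkItem:     # claim a work item
        item = self.items[item_id]
        item.state = "allocated"
        return item

    def complete(self, item_id: str, result: dict) -> None:
        # In a real system the result and state change would be reported back
        # to the enactment service so it can schedule successor tasks.
        self.items[item_id].state = "completed"


if __name__ == "__main__":
    handler = WorklistHandler()
    handler.offer(WorkItem("wi-1", "report-findings", "radiologist"))
    pending = handler.worklist("radiologist")
    handler.checkout(pending[0].item_id)
    handler.complete("wi-1", {"status": "ok"})
    print(handler.items["wi-1"].state)
```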

  10. Spatial cluster analysis of nanoscopically mapped serotonin receptors for classification of fixed brain tissue

    NASA Astrophysics Data System (ADS)

    Sams, Michael; Silye, Rene; Göhring, Janett; Muresan, Leila; Schilcher, Kurt; Jacak, Jaroslaw

    2014-01-01

    We present a cluster spatial analysis method using nanoscopic dSTORM images to determine changes in protein cluster distributions within brain tissue. Such methods are suitable to investigate human brain tissue and will help to achieve a deeper understanding of brain disease along with aiding drug development. Human brain tissue samples are usually treated postmortem via standard fixation protocols, which are established in clinical laboratories. Therefore, our localization microscopy-based method was adapted to characterize protein density and protein cluster localization in samples fixed using different protocols followed by common fluorescent immunohistochemistry techniques. The localization microscopy allows nanoscopic mapping of serotonin 5-HT1A receptor groups within a two-dimensional image of a brain tissue slice. These nanoscopically mapped proteins can be confined to clusters by applying the proposed statistical spatial analysis. Selected features of such clusters were subsequently used to characterize and classify the tissue. Samples were obtained from different types of patients, fixed with different preparation methods, and finally stored in a human tissue bank. To verify the proposed method, samples of a cryopreserved healthy brain have been compared with epitope-retrieved and paraffin-fixed tissues. Furthermore, samples of healthy brain tissues were compared with data obtained from patients suffering from mental illnesses (e.g., major depressive disorder). Our work demonstrates the applicability of localization microscopy and image analysis methods for comparison and classification of human brain tissues at a nanoscopic level. Furthermore, the presented workflow marks a unique technological advance in the characterization of protein distributions in brain tissue sections.
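
    As an illustration of confining localizations to clusters and deriving per-cluster features, the sketch below applies a generic density-based algorithm (DBSCAN) to synthetic 2D coordinates; the eps and min_samples values are assumptions, and the paper's own statistical spatial analysis may differ.

```python
# Illustrative sketch only: confine nanoscopically mapped localizations to
# clusters with DBSCAN and extract simple per-cluster features.
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_features(xy_nm, eps_nm=50.0, min_points=10):
    """xy_nm: (N, 2) array of localization coordinates in nanometres.
    Returns a list of (n_localizations, radius_of_gyration_nm) per cluster."""
    labels = DBSCAN(eps=eps_nm, min_samples=min_points).fit_predict(xy_nm)
    features = []
    for cluster_id in set(labels) - {-1}:          # -1 marks unclustered noise
        pts = xy_nm[labels == cluster_id]
        centroid = pts.mean(axis=0)
        r_gyr = np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean())
        features.append((len(pts), float(r_gyr)))
    return features


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cluster = rng.normal(loc=[500, 500], scale=20, size=(80, 2))   # one tight cluster
    background = rng.uniform(0, 2000, size=(60, 2))                # sparse background
    print(cluster_features(np.vstack([cluster, background])))
```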

  11. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting cloud as their execution environment, it becomes increasingly challenging on how to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy-to-extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
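
    The four proposed algorithms are not spelled out in the abstract; the sketch below shows the general flavour of such heuristics with a simple greedy choice of VM type by a weighted runtime/cost score. The VM catalogue and the weighting factor are assumptions for illustration only, not one of the paper's algorithms.

```python
# Minimal sketch of a greedy price/performance heuristic for assigning an
# incoming workflow task to a VM type (illustrative, not the paper's method).
from dataclasses import dataclass
from typing import List


@dataclass
class VmType:
    name: str
    speed: float           # relative compute speed (work units / hour)
    price_per_hour: float  # $/hour


def pick_vm(work_units: float, vm_types: List[VmType], cost_weight: float = 0.5):
    """Score each VM type by a blend of runtime and cost (lower is better)."""
    def score(vm: VmType) -> float:
        runtime = work_units / vm.speed
        cost = runtime * vm.price_per_hour
        return (1.0 - cost_weight) * runtime + cost_weight * cost
    return min(vm_types, key=score)


if __name__ == "__main__":
    catalogue = [VmType("small", 1.0, 0.05),
                 VmType("medium", 2.0, 0.12),
                 VmType("large", 4.0, 0.30)]
    print(pick_vm(work_units=8.0, vm_types=catalogue).name)
```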

  12. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE PAGES

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...

    2016-07-21

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc: they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
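
    The template idea can be illustrated with plain-Python stand-ins for sequence and parallel composition; the helpers below are written for this example and are not the actual Tigres API, whose signatures are not given in the abstract.

```python
# Conceptual illustration of template-based pipeline composition (sequence,
# parallel/split/merge). These helpers are stand-ins, NOT the Tigres API.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, List


def sequence(data, stages: Iterable[Callable]):
    """Run stages one after another, feeding each output to the next."""
    for stage in stages:
        data = stage(data)
    return data


def parallel(items: Iterable, task: Callable, workers: int = 4) -> List:
    """Apply the same task to every item concurrently (a split/merge pattern)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, items))


if __name__ == "__main__":
    raw = [3, 1, 4, 1, 5]
    cleaned = sequence(raw, [sorted, lambda xs: [x * 10 for x in xs]])
    analysed = parallel(cleaned, lambda x: x + 1)
    print(analysed)   # [11, 11, 31, 41, 51]
```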

  13. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad hoc: they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.

  14. An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64.

    PubMed

    Winkler, Robert

    2015-01-01

    In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models. Several data processing and data evaluation steps are required to come to the final results. These operations are often difficult to reproduce because of overly specific computing platforms. This effect, known as 'workflow decay', can be diminished by using a standardized informatic infrastructure. Thus, we compiled an integrated platform, which contains ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) taverna. We explain the useful combination of the tools with practical examples: (1) a workflow for protein identification and validation, with subsequent Association Analysis of peptides, (2) cluster analysis and Data Mining in targeted Metabolomics, and (3) raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present their application for finding co-occurring peptides, which can be used for targeted proteomics, the discovery of alternative biomarkers and protein-protein interactions. Data Mining-derived models displayed a higher robustness and accuracy for classifying sample groups in targeted Metabolomics than cluster analyses. Random Forest models not only provide predictive models, which can be deployed for new data sets, but also the variable importance. We demonstrate that the latter is especially useful for tracking down significant signals and affected pathways in untargeted Metabolomics. Thus, Random Forest modeling supports the unbiased search for relevant biological features in Metabolomics. Our results clearly manifest the importance of Data Mining methods for disclosing non-obvious information in biological mass spectrometry. The application of a Workflow Management System and the integration of all required programs and data in a consistent platform make the presented data analysis strategies reproducible for non-expert users. The simple remastering process and the Open Source licenses of MASSyPup64 (http://www.bioprocess.org/massypup/) enable the continuous improvement of the system.
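
    As a generic illustration of the Random Forest classification and variable-importance step described above, the sketch below uses scikit-learn on synthetic data; MASSyPup64 itself performs these analyses with R packages (e.g. via Rattle), so this is an analogue, not the platform's own workflow.

```python
# Generic illustration: fit a Random Forest classifier to sample groups and
# rank features by variable importance (synthetic data, scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_samples, n_features = 40, 25                 # e.g. metabolite intensities
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)  # group label driven by 2 features

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Variable importance highlights the signals that separate the sample groups.
ranked = np.argsort(model.feature_importances_)[::-1]
for idx in ranked[:5]:
    print(f"feature {idx}: importance {model.feature_importances_[idx]:.3f}")
```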

  15. Standardizing clinical trials workflow representation in UML for international site comparison.

    PubMed

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M O; Rodrigues, Maria J; Shah, Jatin; Loures, Marco R; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-11-09

    With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.

  16. Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison

    PubMed Central

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M. O.; Rodrigues, Maria J.; Shah, Jatin; Loures, Marco R.; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-01-01

    Background With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Methods Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Results Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. Conclusions This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows. PMID:21085484

  17. Improving clinical laboratory efficiency: a time-motion evaluation of the Abbott m2000 RealTime and Roche COBAS AmpliPrep/COBAS TaqMan PCR systems for the simultaneous quantitation of HIV-1 RNA and HCV RNA.

    PubMed

    Amendola, Alessandra; Coen, Sabrina; Belladonna, Stefano; Pulvirenti, F Renato; Clemens, John M; Capobianchi, M Rosaria

    2011-08-01

    Diagnostic laboratories need automation that facilitates efficient processing and workflow management to meet today's challenges for expanding services and reducing cost, yet maintaining the highest levels of quality. Processing efficiency of two commercially available automated systems for quantifying HIV-1 and HCV RNA, Abbott m2000 system and Roche COBAS Ampliprep/COBAS TaqMan 96 (docked) systems (CAP/CTM), was evaluated in a mid/high throughput workflow laboratory using a representative daily workload of 24 HCV and 72 HIV samples. Three test scenarios were evaluated: A) one run with four batches on the CAP/CTM system, B) two runs on the Abbott m2000 and C) one run using the Abbott m2000 maxCycle feature (maxCycle) for co-processing these assays. Cycle times for processing, throughput and hands-on time were evaluated. Overall processing cycle time was 10.3, 9.1 and 7.6 h for Scenarios A), B) and C), respectively. Total hands-on time for each scenario was, in order, 100.0 (A), 90.3 (B) and 61.4 min (C). The interface of an automated analyzer to the laboratory workflow, notably system set up for samples and reagents and clean up functions, are as important as the automation capability of the analyzer for the overall impact to processing efficiency and operator hands-on time.

  18. A High-throughput Assay for mRNA Silencing in Primary Cortical Neurons in vitro with Oligonucleotide Therapeutics.

    PubMed

    Alterman, Julia F; Coles, Andrew H; Hall, Lauren M; Aronin, Neil; Khvorova, Anastasia; Didiot, Marie-Cécile

    2017-08-20

    Primary neurons represent an ideal cellular system for the identification of therapeutic oligonucleotides for the treatment of neurodegenerative diseases. However, due to the sensitive nature of primary cells, the transfection of small interfering RNAs (siRNA) using classical methods is laborious and often shows low efficiency. Recent progress in oligonucleotide chemistry has enabled the development of stabilized and hydrophobically modified small interfering RNAs (hsiRNAs). This new class of oligonucleotide therapeutics shows extremely efficient self-delivery properties and supports potent and durable effects in vitro and in vivo . We have developed a high-throughput in vitro assay to identify and test hsiRNAs in primary neuronal cultures. To simply, rapidly, and accurately quantify the mRNA silencing of hundreds of hsiRNAs, we use the QuantiGene 2.0 quantitative gene expression assay. This high-throughput, 96-well plate-based assay can quantify mRNA levels directly from sample lysate. Here, we describe a method to prepare short-term cultures of mouse primary cortical neurons in a 96-well plate format for high-throughput testing of oligonucleotide therapeutics. This method supports the testing of hsiRNA libraries and the identification of potential therapeutics within just two weeks. We detail methodologies of our high throughput assay workflow from primary neuron preparation to data analysis. This method can help identify oligonucleotide therapeutics for treatment of various neurological diseases.

  19. Reproducible Tissue Homogenization and Protein Extraction for Quantitative Proteomics Using MicroPestle-Assisted Pressure-Cycling Technology.

    PubMed

    Shao, Shiying; Guo, Tiannan; Gross, Vera; Lazarev, Alexander; Koh, Ching Chiek; Gillessen, Silke; Joerger, Markus; Jochum, Wolfram; Aebersold, Ruedi

    2016-06-03

    The reproducible and efficient extraction of proteins from biopsy samples for quantitative analysis is a critical step in biomarker and translational research. Recently, we described a method consisting of pressure-cycling technology (PCT) and sequential windowed acquisition of all theoretical fragment ions-mass spectrometry (SWATH-MS) for the rapid quantification of thousands of proteins from biopsy-size tissue samples. As an improvement of the method, we have incorporated the PCT-MicroPestle into the PCT-SWATH workflow. The PCT-MicroPestle is a novel, miniaturized, disposable mechanical tissue homogenizer that fits directly into the microTube sample container. We optimized the pressure-cycling conditions for tissue lysis with the PCT-MicroPestle and benchmarked the performance of the system against the conventional PCT-MicroCap method using mouse liver, heart, brain, and human kidney tissues as test samples. The data indicate that the digestion of the PCT-MicroPestle-extracted proteins yielded 20-40% more MS-ready peptide mass from all tissues tested with a comparable reproducibility when compared to the conventional PCT method. Subsequent SWATH-MS analysis identified a higher number of biologically informative proteins from a given sample. In conclusion, we have developed a new device that can be seamlessly integrated into the PCT-SWATH workflow, leading to increased sample throughput and improved reproducibility at both the protein extraction and proteomic analysis levels when applied to the quantitative proteomic analysis of biopsy-level samples.

  20. The direct analysis of drug distribution of rotigotine-loaded microspheres from tissue sections by LESA coupled with tandem mass spectrometry.

    PubMed

    Xu, Li-Xiao; Wang, Tian-Tian; Geng, Yin-Yin; Wang, Wen-Yan; Li, Yin; Duan, Xiao-Kun; Xu, Bin; Liu, Charles C; Liu, Wan-Hui

    2017-09-01

    The direct analysis of drug distribution of rotigotine-loaded microspheres (RoMS) from tissue sections by liquid extraction surface analysis (LESA) coupled with tandem mass spectrometry (MS/MS) was demonstrated. The RoMS distribution in rat tissues assessed by the ambient LESA-MS/MS approach without extensive or tedious sample pretreatment was compared with that obtained by a conventional liquid chromatography tandem mass spectrometry (LC-MS/MS) method in which organ excision and subsequent solvent extraction were commonly employed before analysis. Results obtained from the two were well correlated for a majority of the organs, such as muscle, liver, stomach, and hippocampus. The distribution of RoMS in the brain, however, was found to be mainly focused in the hippocampus and striatum regions as shown by the LESA-imaged profiles. The LESA approach we developed is sensitive enough, with an estimated LLOQ at 0.05 ng/mL of rotigotine in brain tissue, and information-rich with minimal sample preparation, suitable, and promising in assisting the development of new drug delivery systems for controlled drug release and protection. Graphical abstract Workflow for the LESA-MS/MS imaging of brain tissue section after intramuscular RoMS administration.

  1. Hekate: Software Suite for the Mass Spectrometric Analysis and Three-Dimensional Visualization of Cross-Linked Protein Samples

    PubMed Central

    2013-01-01

    Chemical cross-linking of proteins combined with mass spectrometry provides an attractive and novel method for the analysis of native protein structures and protein complexes. Analysis of the data however is complex. Only a small number of cross-linked peptides are produced during sample preparation and must be identified against a background of more abundant native peptides. To facilitate the search and identification of cross-linked peptides, we have developed a novel software suite, named Hekate. Hekate is a suite of tools that address the challenges involved in analyzing protein cross-linking experiments when combined with mass spectrometry. The software is an integrated pipeline for the automation of the data analysis workflow and provides a novel scoring system based on principles of linear peptide analysis. In addition, it provides a tool for the visualization of identified cross-links using three-dimensional models, which is particularly useful when combining chemical cross-linking with other structural techniques. Hekate was validated by the comparative analysis of cytochrome c (bovine heart) against previously reported data.1 Further validation was carried out on known structural elements of DNA polymerase III, the catalytic α-subunit of the Escherichia coli DNA replisome along with new insight into the previously uncharacterized C-terminal domain of the protein. PMID:24010795

  2. Rapid intra-operative diagnosis of kidney cancer by attenuated total reflection infrared spectroscopy of tissue smears.

    PubMed

    Pucetaite, Milda; Velicka, Martynas; Urboniene, Vidita; Ceponkus, Justinas; Bandzeviciute, Rimante; Jankevicius, Feliksas; Zelvys, Arunas; Sablinskas, Valdas; Steiner, Gerald

    2018-05-01

    Herein, a technique to analyze air-dried kidney tissue impression smears by means of attenuated total reflection infrared (ATR-IR) spectroscopy is presented. Spectral tumor markers-absorption bands of glycogen-are identified in the ATR-IR spectra of the kidney tissue smear samples. Thin kidney tissue cryo-sections currently used for IR spectroscopic analysis lack such spectral markers as the sample preparation causes irreversible molecular changes in the tissue. In particular, freeze-thaw cycle results in degradation of the glycogen and reduction or complete dissolution of its content. Supervised spectral classification was applied to the recorded spectra of the smears and the test spectra were classified with a high accuracy of 92% for normal tissue and 94% for tumor tissue, respectively. For further development, we propose that combination of the method with optical fiber ATR probes could potentially be used for rapid real-time intra-operative tissue analysis without interfering with either the established protocols of pathological examination or the ordinary workflow of operating surgeon. Such approach could ensure easier transition of the method to clinical applications where it may complement the results of gold standard histopathology examination and aid in more precise resection of kidney tumors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A field-to-desktop toolchain for X-ray CT densitometry enables tree ring analysis.

    PubMed

    De Mil, Tom; Vannoppen, Astrid; Beeckman, Hans; Van Acker, Joris; Van den Bulcke, Jan

    2016-06-01

    Disentangling tree growth requires more than ring width data only. Densitometry is considered a valuable proxy, yet laborious wood sample preparation and lack of dedicated software limit the widespread use of density profiling for tree ring analysis. An X-ray computed tomography-based toolchain of tree increment cores is presented, which results in profile data sets suitable for visual exploration as well as density-based pattern matching. Two temperate (Quercus petraea, Fagus sylvatica) and one tropical species (Terminalia superba) were used for density profiling using an X-ray computed tomography facility with custom-made sample holders and dedicated processing software. Density-based pattern matching is developed and able to detect anomalies in ring series that can be corrected via interactive software. A digital workflow allows generation of structure-corrected profiles of large sets of cores in a short time span that provide sufficient intra-annual density information for tree ring analysis. Furthermore, visual exploration of such data sets is of high value. The dated profiles can be used for high-resolution chronologies and also offer opportunities for fast screening of lesser studied tropical tree species. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
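
    Density-based pattern matching for spotting anomalies in ring series can be illustrated by a normalised cross-correlation between a measured density profile and a dated reference, searching over lags; the sketch below is a generic approach on synthetic profiles, not the toolchain's own algorithm.

```python
# Illustrative sketch: find the lag that best aligns a measured density profile
# with a dated reference profile via normalised cross-correlation.
import numpy as np


def best_lag(profile, reference, max_lag=50):
    """Return (lag, correlation) of the best alignment of profile vs reference."""
    p = (profile - profile.mean()) / profile.std()
    r = (reference - reference.mean()) / reference.std()
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = p[lag:], r[:len(r) - lag]
        else:
            a, b = p[:len(p) + lag], r[-lag:]
        n = min(len(a), len(b))
        if n < 10:
            continue
        corr = float(np.corrcoef(a[:n], b[:n])[0, 1])
        if corr > best[1]:
            best = (lag, corr)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    ref = np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 0.1, 1000)
    core = np.roll(ref, 7) + rng.normal(0, 0.1, 1000)   # same series, shifted by 7
    print(best_lag(core, ref))                          # expect a lag near 7
```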

  4. Environmental DNA metabarcoding: Transforming how we survey animal and plant communities.

    PubMed

    Deiner, Kristy; Bik, Holly M; Mächler, Elvira; Seymour, Mathew; Lacoursière-Roussel, Anaïs; Altermatt, Florian; Creer, Simon; Bista, Iliana; Lodge, David M; de Vere, Natasha; Pfrender, Michael E; Bernatchez, Louis

    2017-11-01

    The genomic revolution has fundamentally changed how we survey biodiversity on earth. High-throughput sequencing ("HTS") platforms now enable the rapid sequencing of DNA from diverse kinds of environmental samples (termed "environmental DNA" or "eDNA"). Coupling HTS with our ability to associate sequences from eDNA with a taxonomic name is called "eDNA metabarcoding" and offers a powerful molecular tool capable of noninvasively surveying species richness from many ecosystems. Here, we review the use of eDNA metabarcoding for surveying animal and plant richness, and the challenges in using eDNA approaches to estimate relative abundance. We highlight eDNA applications in freshwater, marine and terrestrial environments, and in this broad context, we distill what is known about the ability of different eDNA sample types to approximate richness in space and across time. We provide guiding questions for study design and discuss the eDNA metabarcoding workflow with a focus on primers and library preparation methods. We additionally discuss important criteria for consideration of bioinformatic filtering of data sets, with recommendations for increasing transparency. Finally, looking to the future, we discuss emerging applications of eDNA metabarcoding in ecology, conservation, invasion biology, biomonitoring, and how eDNA metabarcoding can empower citizen science and biodiversity education. © 2017 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  5. Quantitative workflow based on NN for weighting criteria in landfill suitability mapping

    NASA Astrophysics Data System (ADS)

    Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Alkhasawneh, Mutasem Sh.; Aziz, Hamidi Abdul

    2017-10-01

    Our study aims to introduce a new quantitative workflow that integrates neural networks (NNs) and multi-criteria decision analysis (MCDA). Existing MCDA workflows reveal a number of drawbacks because of their reliance on human knowledge in the weighting stage. Thus, a new workflow is presented to form suitability maps at the regional scale for solid waste planning based on NNs. A feed-forward neural network was employed in the workflow. A total of 34 criteria were pre-processed to establish the input dataset for NN modelling. The final learned network was used to acquire the weights of the criteria. Accuracies of 95.2% and 93.2% were achieved for the training dataset and testing dataset, respectively. The workflow was found to be capable of reducing human interference to generate highly reliable maps. The proposed workflow reveals the applicability of NNs in generating landfill suitability maps and the feasibility of integrating them with existing MCDA workflows.
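
    The abstract does not state how criterion weights are read off the learned network; one common approach is a Garson-type connection-weight calculation, sketched below with scikit-learn on synthetic data (6 criteria instead of 34). All names, data and parameters are illustrative assumptions.

```python
# Illustrative sketch: train a feed-forward network on criterion data and derive
# relative criterion weights with a Garson-style connection-weight calculation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 6))                    # 6 criteria instead of 34
y = (0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * X[:, 5] > 0.5).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

w_ih = np.abs(net.coefs_[0])                      # input -> hidden weights
w_ho = np.abs(net.coefs_[1]).ravel()              # hidden -> output weights

# Garson-style calculation: each input's share of every hidden neuron's
# incoming weight, scaled by that neuron's connection to the output.
contrib = (w_ih / w_ih.sum(axis=0)) * w_ho        # shape: (inputs, hidden)
weights = contrib.sum(axis=1)
weights /= weights.sum()

for i, w in enumerate(weights):
    print(f"criterion {i}: weight {w:.2f}")
```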

  6. Risk management frameworks: supporting the next generation of Murray-Darling Basin water sharing plans

    NASA Astrophysics Data System (ADS)

    Podger, G. M.; Cuddy, S. M.; Peeters, L.; Smith, T.; Bark, R. H.; Black, D. C.; Wallbrink, P.

    2014-09-01

    Water jurisdictions in Australia are required to prepare and implement water resource plans. In developing these plans the common goal is realising the best possible use of the water resources - maximising outcomes while minimising negative impacts. This requires managing the risks associated with assessing and balancing cultural, industrial, agricultural, social and environmental demands for water within a competitive and resource-limited environment. Recognising this, conformance to international risk management principles (ISO 31000:2009) have been embedded within the Murray-Darling Basin Plan. Yet, to date, there has been little strategic investment by water jurisdictions in bridging the gap between principle and practice. The ISO 31000 principles and the risk management framework that embodies them align well with an adaptive management paradigm within which to conduct water resource planning. They also provide an integrative framework for the development of workflows that link risk analysis with risk evaluation and mitigation (adaptation) scenarios, providing a transparent, repeatable and robust platform. This study, through a demonstration use case and a series of workflows, demonstrates to policy makers how these principles can be used to support the development of the next generation of water sharing plans in 2019. The workflows consider the uncertainty associated with climate and flow inputs, and model parameters on irrigation and hydropower production, meeting environmental flow objectives and recreational use of the water resource. The results provide insights to the risks associated with meeting a range of different objectives.

  7. Lowering the Barriers to Integrative Aquatic Ecosystem Science: Semantic Provenance, Open Linked Data, and Workflows

    NASA Astrophysics Data System (ADS)

    Harmon, T.; Hofmann, A. F.; Utz, R.; Deelman, E.; Hanson, P. C.; Szekely, P.; Villamizar, S. R.; Knoblock, C.; Guo, Q.; Crichton, D. J.; McCann, M. P.; Gil, Y.

    2011-12-01

    Environmental cyber-observatory (ECO) planning and implementation has been ongoing for more than a decade now, and several major efforts have recently come online or will soon. Some investigators in the relevant research communities will use ECO data, traditionally by developing their own client-side services to acquire data and then manually create custom tools to integrate and analyze it. However, a significant portion of the aquatic ecosystem science community will need more custom services to manage locally collected data. The latter group represents enormous intellectual capacity when one envisions thousands of ecosystems scientists supplementing ECO baseline data by sharing their own locally intensive observational efforts. This poster summarizes the outcomes of the June 2011 Workshop for Aquatic Ecosystem Sustainability (WAES) which focused on the needs of aquatic ecosystem research on inland waters and oceans. Here we advocate new approaches to support scientists to model, integrate, and analyze data based on: 1) a new breed of software tools in which semantic provenance is automatically created and used by the system, 2) the use of open standards based on RDF and Linked Data Principles to facilitate sharing of data and provenance annotations, 3) the use of workflows to represent explicitly all data preparation, integration, and processing steps in a way that is automatically repeatable. Aquatic ecosystems workflow exemplars are provided and discussed in terms of their potential broaden data sharing, analysis and synthesis thereby increasing the impact of aquatic ecosystem research.

  8. Analysis of Serum Total and Free PSA Using Immunoaffinity Depletion Coupled to SRM: Correlation with Clinical Immunoassay Tests

    PubMed Central

    Liu, Tao; Hossain, Mahmud; Schepmoes, Athena A.; Fillmore, Thomas L.; Sokoll, Lori J.; Kronewitter, Scott R.; Izmirlian, Grant; Shi, Tujin; Qian, Wei-Jun; Leach, Robin J.; Thompson, Ian M.; Chan, Daniel W.; Smith, Richard D.; Kagan, Jacob; Srivastava, Sudhir; Rodland, Karin D.; Camp, David G.

    2012-01-01

    Recently, selected reaction monitoring mass spectrometry (SRM-MS) has been more frequently applied to measure low abundance biomarker candidates in tissues and biofluids, owing to its high sensitivity and specificity, simplicity of assay configuration, and exceptional multiplexing capability. In this study, we report for the first time the development of immunoaffinity depletion-based workflows and SRM-MS assays that enable sensitive and accurate quantification of total and free prostate-specific antigen (PSA) in serum without the requirement for specific PSA antibodies. Low ng/mL level detection of both total and free PSA was consistently achieved in both PSA-spiked female serum samples and actual patient serum samples. Moreover, comparison of the results obtained when SRM PSA assays and conventional immunoassays were applied to the same samples showed good correlation in several independent clinical serum sample sets. These results demonstrate that the workflows and SRM assays developed here provide an attractive alternative for reliably measuring candidate biomarkers in human blood, without the need to develop affinity reagents. Furthermore, the simultaneous measurement of multiple biomarkers, including the free and bound forms of PSA, can be performed in a single multiplexed analysis using high-resolution liquid chromatographic separation coupled with SRM-MS. PMID:22846433

  9. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237
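
    The paper's four heuristics are not reproduced here; as a minimal sketch of the general idea, the snippet below chooses a VM type for an incoming workflow request by a weighted cost/runtime score, with invented VM names, prices and speeds.

        from dataclasses import dataclass

        @dataclass
        class VmType:
            name: str
            price_per_hour: float   # USD, assumed
            relative_speed: float   # higher is faster, assumed

        @dataclass
        class WorkflowRequest:
            name: str
            work_units: float       # abstract amount of computation

        def pick_vm(request, vm_types, weight_cost=0.5):
            """Greedy choice balancing runtime against cost (illustrative only)."""
            def score(vm):
                runtime_h = request.work_units / vm.relative_speed
                cost = runtime_h * vm.price_per_hour
                return weight_cost * cost + (1 - weight_cost) * runtime_h
            return min(vm_types, key=score)

        vm_catalog = [VmType("small", 0.10, 1.0),
                      VmType("medium", 0.20, 2.2),
                      VmType("large", 0.40, 4.0)]
        print(pick_vm(WorkflowRequest("montage-run", work_units=12.0), vm_catalog).name)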

  10. End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas

    NASA Astrophysics Data System (ADS)

    Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.

    2017-11-01

    Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated tools for segmentation speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for the high-throughput analysis of large patient volumes. A semi-automated workflow was developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to their corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow using Dice coefficient analysis. The average relative differences for the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the MPHTXT and NASTRAN mesh file formats were compared using differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with post-processing, indicating that post-processing converges on the manually corrected model. Only a marginal relative difference was found between the electric field maps of models with and without manual correction, and a clear advantage of using the NASTRAN mesh file format was identified. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
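
    For reference, the Dice coefficient used to compare segmentations is the standard overlap measure 2|A∩B|/(|A|+|B|); a minimal NumPy implementation on toy masks (not the authors' code) is:

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice = 2|A∩B| / (|A| + |B|) for two boolean segmentation masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            if denom == 0:
                return 1.0  # both masks empty: define as perfect agreement
            return 2.0 * np.logical_and(a, b).sum() / denom

        # Toy example with two 3x3 masks
        auto = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
        manual = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
        print(f"Dice = {dice_coefficient(auto, manual):.2f}")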

  11. Top ten challenges when interfacing a laboratory information system to an electronic health record: Experience at a large academic medical center.

    PubMed

    Petrides, Athena K; Tanasijevic, Milenko J; Goonan, Ellen M; Landman, Adam B; Kantartjis, Michalis; Bates, David W; Melanson, Stacy E F

    2017-10-01

    Recent U.S. government regulations incentivize implementation of an electronic health record (EHR) with computerized order entry and structured results display. Many institutions have also chosen to interface their EHR to their laboratory information system (LIS). Reported long-term benefits include increased efficiency and improved quality and safety. In order to successfully implement an interfaced EHR-LIS, institutions must plan years in advance and anticipate the impact of an integrated system. It can be challenging to fully understand the technical, workflow and resource aspects and adequately prepare for a potentially protracted system implementation and the subsequent stabilization. We describe the top ten challenges that we encountered in our clinical laboratories following the implementation of an interfaced EHR-LIS and offer suggestions on how to overcome these challenges. This study was performed at a 777-bed, tertiary care center which recently implemented an interfaced EHR-LIS. Challenges were recorded during EHR-LIS implementation and stabilization and the authors describe the top ten. Our top ten challenges were selection and harmonization of test codes, detailed training for providers on test ordering, communication with EHR provider champions during the build process, fluid orders and collections, supporting specialized workflows, sufficient reports and metrics, increased volume of inpatient venipunctures, adequate resources during stabilization, unanticipated changes to laboratory workflow and ordering specimens for anatomic pathology. A few suggestions to overcome these challenges include regular meetings with clinical champions, advanced considerations of reports and metrics that will be needed, adequate training of laboratory staff on new workflows in the EHR and defining all tests including anatomic pathology in the LIS. EHR-LIS implementations have many challenges requiring institutions to adapt and develop new infrastructures. This article should be helpful to other institutions facing or undergoing a similar endeavor. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Dawn: A Simulation Model for Evaluating Costs and Tradeoffs of Big Data Science Architectures

    NASA Astrophysics Data System (ADS)

    Cinquini, L.; Crichton, D. J.; Braverman, A. J.; Kyo, L.; Fuchs, T.; Turmon, M.

    2014-12-01

    In many scientific disciplines, scientists and data managers are bracing for an upcoming deluge of big data volumes, which will increase the size of current data archives by a factor of 10-100 times. For example, the next Climate Model Inter-comparison Project (CMIP6) will generate a global archive of model output of approximately 10-20 petabytes, while the upcoming next generation of NASA decadal Earth Observing instruments is expected to collect tens of gigabytes per day. In radio astronomy, the Square Kilometre Array (SKA) will collect data in the exabytes-per-day range, of which (after reduction and processing) around 1.5 exabytes per year will be stored. The effective and timely processing of these enormous data streams will require the design of new data reduction and processing algorithms, new system architectures, and new techniques for evaluating computation uncertainty. Yet at present no general software tool or framework exists that allows system architects to model their expected data processing workflow and determine the network, computational and storage resources needed to prepare their data for scientific analysis. In order to fill this gap, at NASA/JPL we have been developing a preliminary model named DAWN (Distributed Analytics, Workflows and Numerics) for simulating arbitrarily complex workflows composed of any number of data processing and movement tasks. The model can be configured with a representation of the problem at hand (the data volumes, the processing algorithms, the available computing and network resources), and is able to evaluate tradeoffs between different possible workflows based on several estimators: overall elapsed time, separate computation and transfer times, resulting uncertainty, and others. So far, we have been applying DAWN to analyze architectural solutions for 4 different use cases from distinct science disciplines: climate science, astronomy, hydrology and a generic cloud computing use case. This talk will present preliminary results and discuss how DAWN can be evolved into a powerful tool for designing system architectures for data intensive science.
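
    As a toy illustration of the kind of tradeoff estimate such a simulator produces, the sketch below computes the overall elapsed time of a hypothetical two-stage workflow from assumed data volumes, processing rates and network bandwidth; all numbers, and the assumption of strictly serial execution, are invented and do not come from DAWN.

        def stage_time(data_tb, proc_rate_tb_per_h):
            """Hours to process a given data volume at a given rate."""
            return data_tb / proc_rate_tb_per_h

        def transfer_time(data_tb, bandwidth_gbps):
            """Hours to move a data volume over a link of the given bandwidth."""
            seconds = (data_tb * 8e12) / (bandwidth_gbps * 1e9)
            return seconds / 3600.0

        # Hypothetical workflow: reduce raw data at the archive, ship the product, analyze locally.
        raw_tb, reduced_tb = 500.0, 50.0
        elapsed_h = (stage_time(raw_tb, proc_rate_tb_per_h=20.0)
                     + transfer_time(reduced_tb, bandwidth_gbps=10.0)
                     + stage_time(reduced_tb, proc_rate_tb_per_h=5.0))
        print(f"Estimated elapsed time: {elapsed_h:.1f} h")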

  13. Scientific Data Management (SDM) Center for Enabling Technologies. Final Report, 2007-2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludascher, Bertram; Altintas, Ilkay

    Our contributions to advancing the State of the Art in scientific workflows have focused on the following areas: Workflow development; Generic workflow components and templates; Provenance collection and analysis; and, Workflow reliability and fault tolerance.

  14. Exploring the impact of an automated prescription-filling device on community pharmacy technician workflow

    PubMed Central

    Walsh, Kristin E.; Chui, Michelle Anne; Kieser, Mara A.; Williams, Staci M.; Sutter, Susan L.; Sutter, John G.

    2012-01-01

    Objective To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. Methods At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Results Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Conclusion Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign. PMID:21896459

  15. An integrated workflow to assess technical and biological variability of cell population frequencies in human peripheral blood by flow cytometry

    PubMed Central

    Burel, Julie G.; Qian, Yu; Arlehamn, Cecilia Lindestam; Weiskopf, Daniela; Zapardiel-Gonzalo, Jose; Taplitz, Randy; Gilman, Robert H.; Saito, Mayuko; de Silva, Aruna D.; Vijayanand, Pandurangan; Scheuermann, Richard H.; Sette, Alessandro; Peters, Bjoern

    2016-01-01

    In the context of large-scale human system immunology studies, controlling for technical and biological variability is crucial to ensure that experimental data support research conclusions. Here, we report on a universal workflow to evaluate both technical and biological variation in multiparameter flow cytometry, applied to the development of a 10-color panel to identify all major cell populations and T cell subsets in cryopreserved PBMC. Replicate runs from a control donation and comparison of different gating strategies assessed technical variability associated with each cell population and permitted the calculation of a quality control score. Applying our panel to a large collection of PBMC samples, we found that most cell populations showed low intra-individual variability over time. In contrast, certain subpopulations such as CD56 T cells and Temra CD4 T cells were associated with high inter-individual variability. Age but not gender had a significant effect on the frequency of several populations, with a drastic decrease in naïve T cells observed in older donors. Ethnicity also influenced a significant proportion of immune cell population frequencies, emphasizing the need to account for these co-variates in immune profiling studies. Finally, we exemplify the usefulness of our workflow by identifying a novel cell-subset signature of latent tuberculosis infection. Thus, our study provides a universal workflow to establish and evaluate any flow cytometry panel in systems immunology studies. PMID:28069807
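
    As an illustration of how technical variability across replicate control runs can be summarized, a minimal sketch computing the coefficient of variation of each population frequency is shown below; the population names and frequencies are invented, and this is not the quality control score used by the authors.

        import statistics

        # Hypothetical frequencies (% of live cells) from replicate runs of one control donation
        replicate_frequencies = {
            "CD4 T cells":  [31.2, 30.8, 31.9, 30.5],
            "CD56 T cells": [1.1, 0.8, 1.4, 0.9],
        }

        def coefficient_of_variation(values):
            """CV (%) = 100 * SD / mean; lower values indicate lower technical variability."""
            return 100.0 * statistics.stdev(values) / statistics.mean(values)

        for population, freqs in replicate_frequencies.items():
            print(f"{population}: CV = {coefficient_of_variation(freqs):.1f}%")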

  16. An Integrated Workflow To Assess Technical and Biological Variability of Cell Population Frequencies in Human Peripheral Blood by Flow Cytometry.

    PubMed

    Burel, Julie G; Qian, Yu; Lindestam Arlehamn, Cecilia; Weiskopf, Daniela; Zapardiel-Gonzalo, Jose; Taplitz, Randy; Gilman, Robert H; Saito, Mayuko; de Silva, Aruna D; Vijayanand, Pandurangan; Scheuermann, Richard H; Sette, Alessandro; Peters, Bjoern

    2017-02-15

    In the context of large-scale human system immunology studies, controlling for technical and biological variability is crucial to ensure that experimental data support research conclusions. In this study, we report on a universal workflow to evaluate both technical and biological variation in multiparameter flow cytometry, applied to the development of a 10-color panel to identify all major cell populations and T cell subsets in cryopreserved PBMC. Replicate runs from a control donation and comparison of different gating strategies assessed the technical variability associated with each cell population and permitted the calculation of a quality control score. Applying our panel to a large collection of PBMC samples, we found that most cell populations showed low intraindividual variability over time. In contrast, certain subpopulations such as CD56 T cells and Temra CD4 T cells were associated with high interindividual variability. Age but not gender had a significant effect on the frequency of several populations, with a drastic decrease in naive T cells observed in older donors. Ethnicity also influenced a significant proportion of immune cell population frequencies, emphasizing the need to account for these covariates in immune profiling studies. We also exemplify the usefulness of our workflow by identifying a novel cell-subset signature of latent tuberculosis infection. Thus, our study provides a universal workflow to establish and evaluate any flow cytometry panel in systems immunology studies. Copyright © 2017 by The American Association of Immunologists, Inc.

  17. speaq 2.0: A complete workflow for high-throughput 1D NMR spectra processing and quantification.

    PubMed

    Beirnaert, Charlie; Meysman, Pieter; Vu, Trung Nghia; Hermans, Nina; Apers, Sandra; Pieters, Luc; Covaci, Adrian; Laukens, Kris

    2018-03-01

    Nuclear Magnetic Resonance (NMR) spectroscopy is, together with liquid chromatography-mass spectrometry (LC-MS), the most established platform to perform metabolomics. In contrast to LC-MS however, NMR data is predominantly being processed with commercial software. Meanwhile its data processing remains tedious and dependent on user interventions. As a follow-up to speaq, a previously released workflow for NMR spectral alignment and quantitation, we present speaq 2.0. This completely revised framework to automatically analyze 1D NMR spectra uses wavelets to efficiently summarize the raw spectra with minimal information loss or user interaction. The tool offers a fast and easy workflow that starts with the common approach of peak-picking, followed by grouping, thus avoiding the binning step. This yields a matrix consisting of features, samples and peak values that can be conveniently processed either by using included multivariate statistical functions or by using many other recently developed methods for NMR data analysis. speaq 2.0 facilitates robust and high-throughput metabolomics based on 1D NMR but is also compatible with other NMR frameworks or complementary LC-MS workflows. The methods are benchmarked using a simulated dataset and two publicly available datasets. speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq).
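
    speaq 2.0 itself is an R package, so nothing below is its actual API; as a rough Python analogue of the wavelet-based peak-picking step, the sketch uses SciPy's continuous-wavelet peak finder on a simulated spectrum. The grouping and quantification stages of speaq are not reproduced.

        import numpy as np
        from scipy.signal import find_peaks_cwt

        # Simulate a simple 1D NMR-like spectrum: three Lorentzian-shaped peaks plus noise
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 2000)
        spectrum = sum(h / (1 + ((x - c) / w) ** 2) for h, c, w in
                       [(1.0, 2.0, 0.02), (0.6, 5.5, 0.03), (0.8, 7.3, 0.02)])
        spectrum += rng.normal(0, 0.01, x.size)

        # Wavelet-based peak picking over a range of expected peak widths (in data points)
        peak_indices = find_peaks_cwt(spectrum, widths=np.arange(2, 20))
        print("Detected peak positions (ppm-like units):", np.round(x[peak_indices], 2))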

  18. speaq 2.0: A complete workflow for high-throughput 1D NMR spectra processing and quantification

    PubMed Central

    Pieters, Luc; Covaci, Adrian

    2018-01-01

    Nuclear Magnetic Resonance (NMR) spectroscopy is, together with liquid chromatography-mass spectrometry (LC-MS), the most established platform to perform metabolomics. In contrast to LC-MS however, NMR data is predominantly being processed with commercial software. Meanwhile its data processing remains tedious and dependent on user interventions. As a follow-up to speaq, a previously released workflow for NMR spectral alignment and quantitation, we present speaq 2.0. This completely revised framework to automatically analyze 1D NMR spectra uses wavelets to efficiently summarize the raw spectra with minimal information loss or user interaction. The tool offers a fast and easy workflow that starts with the common approach of peak-picking, followed by grouping, thus avoiding the binning step. This yields a matrix consisting of features, samples and peak values that can be conveniently processed either by using included multivariate statistical functions or by using many other recently developed methods for NMR data analysis. speaq 2.0 facilitates robust and high-throughput metabolomics based on 1D NMR but is also compatible with other NMR frameworks or complementary LC-MS workflows. The methods are benchmarked using a simulated dataset and two publicly available datasets. speaq 2.0 is distributed through the existing speaq R package to provide a complete solution for NMR data processing. The package and the code for the presented case studies are freely available on CRAN (https://cran.r-project.org/package=speaq) and GitHub (https://github.com/beirnaert/speaq). PMID:29494588

  19. Non-targeted workflow for identification of antimicrobial compounds in animal feed using bioassay-directed screening in combination with liquid chromatography-high resolution mass spectrometry.

    PubMed

    Wegh, Robin S; Berendsen, Bjorn J A; Driessen-Van Lankveld, Wilma D M; Pikkemaat, Mariël G; Zuidema, Tina; Van Ginkel, Leen A

    2017-11-01

    A non-targeted workflow is reported for the isolation and identification of antimicrobial active compounds using bioassay-directed screening and LC coupled to high-resolution MS. Suspect samples are extracted using a generic protocol and fractionated using two different LC conditions (A and B). The behaviour of the bioactive compound under these different conditions yields information about the physicochemical properties of the compound and introduces variations in co-eluting compounds in the fractions, which is essential for peak picking and identification. The fractions containing the active compound(s) obtained with conditions A and B are selected using a microbiological effect-based bioassay. The selected bioactive fractions from A and B are analysed using LC combined with high-resolution MS. Selection of relevant signals is automatically carried out by selecting all signals present in both bioactive fractions A and B, yielding tremendous data reduction. The method was assessed using two spiked feed samples and subsequently applied to two feed samples containing an unidentified compound showing microbial growth inhibition. In all cases, the identity of the compound causing microbiological inhibition was successfully confirmed.
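
    The final signal-selection step, keeping only features observed in the bioactive fractions from both LC conditions, can be illustrated with a simple tolerance-based matching sketch; the m/z values, retention times and tolerance below are invented, and only m/z is matched because retention times differ between conditions A and B.

        # Hypothetical (m/z, retention time in min) features detected in the bioactive fractions
        features_condition_a = [(350.1612, 4.2), (421.0733, 6.8), (502.2240, 9.1)]
        features_condition_b = [(350.1609, 5.0), (287.0551, 3.3), (502.2251, 10.4)]

        MZ_TOL = 0.005  # Da, assumed

        def common_signals(feats_a, feats_b, mz_tol=MZ_TOL):
            """Keep only features whose m/z is observed in the bioactive fractions of both conditions."""
            return [(mz_a, rt_a) for mz_a, rt_a in feats_a
                    if any(abs(mz_a - mz_b) <= mz_tol for mz_b, _ in feats_b)]

        print(common_signals(features_condition_a, features_condition_b))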

  20. Research and Implementation of Key Technologies in Multi-Agent System to Support Distributed Workflow

    NASA Astrophysics Data System (ADS)

    Pan, Tianheng

    2018-01-01

    In recent years, the combination of workflow management systems and multi-agent technology has become an active research field. The lack of flexibility in workflow management systems can be mitigated by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which avoids the fragility of the traditional centralized workflow structure. In this paper, the agents of a distributed workflow management system are divided according to their functions, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are discussed.

  1. Progress in digital color workflow understanding in the International Color Consortium (ICC) Workflow WG

    NASA Astrophysics Data System (ADS)

    McCarthy, Ann

    2006-01-01

    The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.

  2. Physician Information Needs and Electronic Health Records (EHRs): Time to Reengineer the Clinic Note.

    PubMed

    Koopman, Richelle J; Steege, Linsey M Barker; Moore, Joi L; Clarke, Martina A; Canfield, Shannon M; Kim, Min S; Belden, Jeffery L

    2015-01-01

    Primary care physicians face cognitive overload daily, perhaps exacerbated by the form of electronic health record documentation. We examined physician information needs to prepare for clinic visits, focusing on past clinic progress notes. This study used cognitive task analysis with 16 primary care physicians in the scenario of preparing for office visits. Physicians reviewed simulated acute and chronic care visit notes. We collected field notes and document highlighting and review, and we audio-recorded cognitive interview while on task, with subsequent thematic qualitative analysis. Member checks included the presentation of findings to the interviewed physicians and their faculty peers. The Assessment and Plan section was most important and usually reviewed first. The History of the Present Illness section could provide supporting information, especially if in narrative form. Physicians expressed frustration with the Review of Systems section, lamenting that the forces driving note construction did not match their information needs. Repetition of information contained in other parts of the chart (eg, medication lists) was identified as a source of note clutter. A workflow that included a patient summary dashboard made some elements of past notes redundant and therefore a source of clutter. Current ambulatory progress notes present more information to the physician than necessary and in an antiquated format. It is time to reengineer the clinic progress note to match the workflow and information needs of its primary consumer. © Copyright 2015 by the American Board of Family Medicine.

  3. Radiology information system: a workflow-based approach.

    PubMed

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P

    2009-09-01

    Introducing workflow management technology in healthcare appears promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system in a radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to tasks and were integrated by transferring non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes, and enhance process management in the department. It can also provide a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing a radiology information system with more flexibility, more functionality for process management and more workflow-aware integration. The work of this paper is an initial endeavor to introduce workflow management technology in healthcare.

  4. Quantitative determination of opioids in whole blood using fully automated dried blood spot desorption coupled to on-line SPE-LC-MS/MS.

    PubMed

    Verplaetse, Ruth; Henion, Jack

    2016-01-01

    Opioids are well known, widely used painkillers. Increased stability of opioids in the dried blood spot (DBS) matrix compared to blood/plasma has been described. Other benefits provided by DBS techniques include point-of-care collection, less invasive microsampling, more economical shipment, and convenient storage. Current methodology for the analysis of micro whole blood samples for opioids is limited to the classical DBS workflow, including tedious manual punching of the DBS cards followed by extraction and liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. The goal of this study was to develop and validate a fully automated on-line sample preparation procedure for the analysis of DBS micro samples relevant to the detection of opioids in finger prick blood. To this end, automated flow-through elution of DBS cards was followed by on-line solid-phase extraction (SPE) and analysis by LC-MS/MS. Selective, sensitive, accurate, and reproducible quantitation of five representative opioids in human blood at sub-therapeutic, therapeutic, and toxic levels was achieved. The range of reliable response (R² ≥ 0.997) was 1 to 500 ng/mL whole blood for morphine, codeine, oxycodone and hydrocodone, and 0.1 to 50 ng/mL for fentanyl. Inter-day, intra-day, and matrix inter-lot accuracy and precision were within 15% (even at the lower limit of quantitation (LLOQ) level). The method was successfully used to measure hydrocodone and its major metabolite norhydrocodone in incurred human samples. Our data support the enormous potential of DBS sampling and automated analysis for monitoring opioids as well as other pharmaceuticals in both anti-doping and pain management regimens. Copyright © 2015 John Wiley & Sons, Ltd.
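
    A calibration curve of the kind summarized by the reported R² ≥ 0.997 can be sketched as an ordinary least-squares fit; the concentrations and peak-area ratios below are invented, and no weighting scheme from the paper is assumed.

        import numpy as np

        # Hypothetical calibration data: nominal concentration (ng/mL) vs. analyte/IS peak-area ratio
        conc = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)
        ratio = np.array([0.012, 0.058, 0.121, 0.597, 1.19, 3.02, 5.95])

        slope, intercept = np.polyfit(conc, ratio, 1)
        predicted = slope * conc + intercept
        ss_res = np.sum((ratio - predicted) ** 2)
        ss_tot = np.sum((ratio - ratio.mean()) ** 2)
        r_squared = 1.0 - ss_res / ss_tot

        print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r_squared:.4f}")
        back_calculated = (0.30 - intercept) / slope  # concentration for an unknown with ratio 0.30
        print(f"Back-calculated concentration: {back_calculated:.1f} ng/mL")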

  5. Information Issues and Contexts that Impair Team Based Communication Workflow: A Palliative Sedation Case Study.

    PubMed

    Cornett, Alex; Kuziemsky, Craig

    2015-01-01

    Implementing team based workflows can be complex because of the scope of providers involved and the extent of information exchange and communication that needs to occur. While a workflow may represent the ideal structure of communication that needs to occur, information issues and contextual factors may impact how the workflow is implemented in practice. Understanding these issues will help us better design systems to support team based workflows. In this paper we use a case study of palliative sedation therapy (PST) to model a PST workflow and then use it to identify purposes of communication, information issues and contextual factors that impact them. We then suggest how our findings could inform health information technology (HIT) design to support team based communication workflows.

  6. Quantitative Proteomics of Sleep-Deprived Mouse Brains Reveals Global Changes in Mitochondrial Proteins

    PubMed Central

    Li, Tie-Mei; Zhang, Ju-en; Lin, Rui; Chen, She; Luo, Minmin; Dong, Meng-Qiu

    2016-01-01

    Sleep is a ubiquitous, tightly regulated, and evolutionarily conserved behavior observed in almost all animals. Prolonged sleep deprivation can be fatal, indicating that sleep is a physiological necessity. However, little is known about its core function. To gain insight into this mystery, we used advanced quantitative proteomics technology to survey the global changes in brain protein abundance. Aiming to gain a comprehensive profile, our proteomics workflow included filter-aided sample preparation (FASP), which increased the coverage of membrane proteins; tandem mass tag (TMT) labeling, for relative quantitation; and high resolution, high mass accuracy, high throughput mass spectrometry (MS). In total, we obtained the relative abundance ratios of 9888 proteins encoded by 6070 genes. Interestingly, we observed significant enrichment for mitochondrial proteins among the differentially expressed proteins. This finding suggests that sleep deprivation strongly affects signaling pathways that govern either energy metabolism or responses to mitochondrial stress. Additionally, the differentially-expressed proteins are enriched in pathways implicated in age-dependent neurodegenerative diseases, including Parkinson’s, Huntington’s, and Alzheimer’s, hinting at possible connections between sleep loss, mitochondrial stress, and neurodegeneration. PMID:27684481
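
    Enrichment of a protein class among differentially expressed proteins is commonly assessed with a 2x2 contingency table; the sketch below applies Fisher's exact test via SciPy to invented counts and is not the statistical procedure reported by the authors.

        from scipy.stats import fisher_exact

        # Hypothetical counts: rows = mitochondrial / non-mitochondrial proteins,
        # columns = differentially expressed / not differentially expressed
        table = [[120,  600],
                 [180, 8988]]

        odds_ratio, p_value = fisher_exact(table, alternative="greater")
        print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.2e}")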

  7. REVIEW ARTICLE: Current trends and future requirements for the mass spectrometric investigation of microbial, mammalian and plant metabolomes

    NASA Astrophysics Data System (ADS)

    Dunn, Warwick B.

    2008-03-01

    The functional levels of biological cells or organisms can be separated into the genome, transcriptome, proteome and metabolome. Of these the metabolome offers specific advantages to the investigation of the phenotype of biological systems. The investigation of the metabolome (metabolomics) has only recently appeared as a mainstream scientific discipline and is currently developing rapidly for the study of microbial, plant and mammalian metabolomes. The metabolome pipeline or workflow encompasses the processes of sample collection and preparation, collection of analytical data, raw data pre-processing, data analysis and data storage. Of these processes the collection of analytical data will be discussed in this review with specific interest shown in the application of mass spectrometry in the metabolomics pipeline. The current developments in mass spectrometry platforms (GC-MS, LC-MS, DIMS and imaging MS) and applications of specific interest will be highlighted. The current limitations of these platforms and applications will be discussed with areas requiring further development also highlighted. These include the detectable coverage of the metabolome, the identification of metabolites and the process of converting raw data to biological knowledge.

  8. Fabric phase sorptive extraction of selected penicillin antibiotic residues from intact milk followed by high performance liquid chromatography with diode array detection.

    PubMed

    Samanidou, Victoria; Michaelidou, Katia; Kabir, Abuzar; Furton, Kenneth G

    2017-06-01

    Fabric phase sorptive extraction (FPSE), a novel sorbent-based microextraction method, was evaluated as a simple and rapid strategy for the extraction of four penicillin antibiotic residues (benzylpenicillin, cloxacillin, dicloxacillin and oxacillin) from cows' milk, without prior protein precipitation. Time-consuming solvent evaporation and reconstitution steps were eliminated successfully from the sample preparation workflow. FPSE utilizes a flexible fabric substrate, chemically coated with a sol-gel derived, highly efficient, organic-inorganic hybrid sorbent as the extraction medium. Herein, short-chain poly(ethylene glycol) provided optimum extraction sensitivity for the selected penicillins, which were analysed using an RP-HPLC method validated according to European Decision 657/2002/EC. The limit of quantitation was 10 μg/kg for benzylpenicillin, 20 μg/kg for cloxacillin, 25 μg/kg for dicloxacillin and 30 μg/kg for oxacillin. These are of a similar order of magnitude to those reported in the literature and (with the exception of benzylpenicillin) are below the maximum residue limits (MRL) set by European legislation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A Workflow to Improve the Alignment of Prostate Imaging with Whole-mount Histopathology.

    PubMed

    Yamamoto, Hidekazu; Nir, Dror; Vyas, Lona; Chang, Richard T; Popert, Rick; Cahill, Declan; Challacombe, Ben; Dasgupta, Prokar; Chandra, Ashish

    2014-08-01

    Evaluation of prostate imaging tests against whole-mount histology specimens requires accurate alignment between radiologic and histologic data sets. Misalignment results in false-positive and -negative zones as assessed by imaging. We describe a workflow for three-dimensional alignment of prostate imaging data against whole-mount prostatectomy reference specimens and assess its performance against a standard workflow. Ethical approval was granted. Patients underwent motorized transrectal ultrasound (Prostate Histoscanning) to generate a three-dimensional image of the prostate before radical prostatectomy. The test workflow incorporated steps for axial alignment between imaging and histology, size adjustments following formalin fixation, and use of custom-made parallel cutters and digital caliper instruments. The control workflow comprised freehand cutting and assumed homogeneous block thicknesses at the same relative angles between pathology and imaging sections. Thirty radical prostatectomy specimens were histologically and radiologically processed, either by an alignment-optimized workflow (n = 20) or a control workflow (n = 10). The optimized workflow generated tissue blocks of heterogeneous thicknesses but with no significant drifting in the cutting plane. The control workflow resulted in significantly nonparallel blocks, accurately matching only one out of four histology blocks to their respective imaging data. The image-to-histology alignment accuracy was 20% greater in the optimized workflow (P < .0001), with higher sensitivity (85% vs. 69%) and specificity (94% vs. 73%) for margin prediction in a 5 × 5-mm grid analysis. A significantly better alignment was observed in the optimized workflow. Evaluation of prostate imaging biomarkers using whole-mount histology references should include a test-to-reference spatial alignment workflow. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  10. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment. PMID:17263870

  11. FAST: A fully asynchronous and status-tracking pattern for geoprocessing services orchestration

    NASA Astrophysics Data System (ADS)

    Wu, Huayi; You, Lan; Gui, Zhipeng; Gao, Shuang; Li, Zhenqiang; Yu, Jingmin

    2014-09-01

    Geoprocessing service orchestration (GSO) provides a unified and flexible way to implement cross-application, long-lived, and multi-step geoprocessing service workflows by coordinating geoprocessing services collaboratively. Usually, geoprocessing services and geoprocessing service workflows are data and/or computing intensive. This intensity may make the execution of a workflow time-consuming. Since it initiates an execution request without blocking other interactions on the client side, an asynchronous mechanism is especially appropriate for GSO workflows. Many critical problems remain to be solved in existing asynchronous patterns for GSO, including difficulties in improving performance, tracking status, and clarifying the workflow structure. These problems make it challenging to orchestrate efficiently, to make statuses instantly available, and to construct clearly structured GSO workflows. A Fully Asynchronous and Status-Tracking (FAST) pattern that adopts asynchronous interactions throughout the whole communication tier of a workflow is proposed for GSO. The proposed FAST pattern includes a mechanism that actively pushes the latest status to clients instantly and economically. An independent proxy was designed to isolate the status-tracking logic from the geoprocessing business logic, which helps form a clear GSO workflow structure. A workflow was implemented in the FAST pattern to simulate the flooding process in the Poyang Lake region. Experimental results show that the proposed FAST pattern can efficiently tackle data- and computing-intensive geoprocessing tasks. The performance of all collaborative partners was improved due to the asynchronous mechanism throughout the communication tier. The status-tracking mechanism helps users retrieve the latest running status of a GSO workflow in an efficient and instant way. The clear structure of the GSO workflow lowers the barriers for geospatial domain experts and model designers to compose asynchronous GSO workflows. Most importantly, it provides better support for locating and diagnosing potential exceptions.
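
    The essence of the pattern, submitting a long-running geoprocessing task without blocking the client while a proxy pushes status updates as they occur, can be sketched with Python's asyncio; the task, statuses and queue-based push below are invented placeholders rather than the actual service machinery.

        import asyncio

        async def push_status(client_queue, status):
            # The proxy actively pushes the latest status to the client instead of being polled.
            await client_queue.put(status)

        async def geoprocessing_task(client_queue):
            await push_status(client_queue, "ACCEPTED")
            for step in ("preprocessing", "flood-simulation", "postprocessing"):
                await push_status(client_queue, f"RUNNING: {step}")
                await asyncio.sleep(0.1)  # stands in for a long-running geoprocessing step
            await push_status(client_queue, "SUCCEEDED")

        async def client():
            queue = asyncio.Queue()
            task = asyncio.create_task(geoprocessing_task(queue))  # non-blocking submission
            while (status := await queue.get()) != "SUCCEEDED":
                print("status update:", status)  # the client remains free to do other work here
            print("workflow finished")
            await task

        asyncio.run(client())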

  12. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R

    2007-01-30

    Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment.

  13. Direct PCR amplification of DNA from human bloodstains, saliva, and touch samples collected with microFLOQ® swabs.

    PubMed

    Ambers, Angie; Wiley, Rachel; Novroski, Nicole; Budowle, Bruce

    2018-01-01

    Previous studies have shown that nylon flocked swabs outperform traditional fiber swabs in DNA recovery due to their innovative design and lack of an internal absorbent core to entrap cellular materials. The microFLOQ® Direct swab, a miniaturized version of the 4N6 FLOQSwab®, has a small swab head that is treated with a lysing agent which allows for direct amplification and DNA profiling from sample collection to final result in less than two hours. Additionally, the microFLOQ® system subsamples only a minute portion of a stain and preserves the vast majority of the sample for subsequent testing or re-analysis, if desired. The efficacy of direct amplification of DNA from dilute bloodstains, saliva stains, and touch samples was evaluated using microFLOQ® Direct swabs and the GlobalFiler™ Express system. Comparisons were made to traditional methods to assess the robustness of this alternate workflow. Controlled studies with 1:19 and 1:99 dilutions of bloodstains and saliva stains consistently yielded higher STR peak heights than standard methods with 1 ng input DNA from the same samples. Touch samples from common items yielded single-source and mixed profiles that were consistent with primary users of the objects. With this novel methodology/workflow, no sample loss occurs and therefore more template DNA is available during amplification. This approach may have important implications for analysis of low-quantity and/or degraded samples that plague forensic casework. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Workflow and Proof of Concept for Non-Targeted Analysis of Environmental Samples by LC-MS/MS

    EPA Science Inventory

    Human exposures include thousands of chemicals acquired through various routes such as inhalation, ingestion, dermal contact, and indirect ingestion. Rapid assessment and screening of these chemicals is a difficult challenge facing EPA in its mission to protect pu...

  15. Workflow to study genetic biodiversity of aflatoxigenic Aspergillus spp. in Georgia, USA

    USDA-ARS?s Scientific Manuscript database

    Peanut seeds were sampled from the entire state of Georgia in 2014. More than 600 isolates of Aspergillus spp. were collected using modified-dichloran rose Bengal (MDRB) medium; 240 of those isolates were fingerprinted with 25 InDel markers within the aflatoxin-biosynthesis gene cluster (ABC). Clust...

  16. A WorkFlow Engine Oriented Modeling System for Hydrologic Sciences

    NASA Astrophysics Data System (ADS)

    Lu, B.; Piasecki, M.

    2009-12-01

    In recent years the use of workflow engines for carrying out modeling and data analysis tasks has gained increased attention in the science and engineering communities. Tasks like processing raw data coming from sensors and passing these raw data streams to filters for QA/QC procedures may require multiple and complicated steps that need to be repeated over and over again. A workflow sequence that carries out a number of steps of varying complexity is an ideal approach to deal with these tasks because the sequence can be stored, called up and repeated again and again. This has several advantages: for one, it ensures repeatability of processing steps and with that provenance, an issue that is increasingly important in the science and engineering communities. It also permits the hand-off of lengthy, time-consuming and error-prone tasks to a chain of processing actions that are carried out automatically, thus reducing the chance of error on the one hand and freeing up time to carry out other tasks on the other. This paper presents the development of a workflow-engine-embedded modeling system that allows users to build up working sequences for carrying out numerical modeling tasks in the hydrologic sciences. Trident, which facilitates creating, running and sharing scientific data analysis workflows, is taken as the central working engine of the modeling system. Currently existing functionalities of the modeling system involve digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. They are stored as sequences or modules, respectively. The sequences can be invoked to carry out their preset tasks in order, for example triangulating a watershed from a raw DEM, whereas the modules, each encapsulating a certain function, can be selected and connected through a GUI workboard to form sequences. The modeling system is demonstrated by setting up a new sequence for simulating rainfall-runoff processes, which involves the embedded Penn State Integrated Hydrologic Model (PIHM) module for hydrologic simulation as a kernel, a DEM-processing sub-sequence which prepares geospatial data for PIHM, a data retrieval module which accesses time series data from online data repositories via web services or from a local database, and a post-processing data management module which stores, visualizes and analyzes model outputs.
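
    A minimal sketch of the sequence idea, chaining modules so that a stored sequence can be re-run end to end, is shown below; the step names mirror the description above, but the functions are invented placeholders rather than Trident or PIHM components.

        def process_dem(raw_dem):
            return f"triangulated({raw_dem})"

        def retrieve_timeseries(station_id):
            return f"rainfall_series({station_id})"

        def run_hydrologic_model(mesh, forcing):
            return f"runoff(mesh={mesh}, forcing={forcing})"

        def postprocess(results):
            return f"hydrograph({results})"

        def rainfall_runoff_sequence(raw_dem, station_id):
            """Stored sequence: each step feeds the next, so the whole chain is repeatable."""
            mesh = process_dem(raw_dem)
            forcing = retrieve_timeseries(station_id)
            results = run_hydrologic_model(mesh, forcing)
            return postprocess(results)

        print(rainfall_runoff_sequence("watershed_dem.tif", station_id="station-001"))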

  17. From MODFLOW-96 to MODFLOW-2005, ParFlow and Others: Updates and a Workflow for Up- and Out- Conversion

    NASA Astrophysics Data System (ADS)

    Pierce, S. A.; Hardesty Lewis, D.

    2017-12-01

    MODFLOW (MF) has served for decades as a de facto standard for groundwater modelling. Despite successive versions, legacy MF-96 simulations are still commonly encountered; such is the case for many of the groundwater availability models of the State of Texas. Unfortunately, even the existence of converters to MF's newer versions has not necessarily stimulated their adoption, let alone the re-creation of legacy models. This state of affairs may be due to the unfamiliarity of the modeller with the terminal or the FORTRAN programming language, resulting in an inability to address the minor or major bugs, nuances, or limitations in compiling or executing the conversion programs. Here, we present a workflow that addresses the above intricacies while attempting to maintain portability in implementation. This workflow is constructed in the form of a Bash script and - with geoscience-oriented users in mind - re-presented as a Jupyter notebook. First, one may choose whether this executable will run with POSIX compliance or with a preference towards the Bash facilities, both widely adopted by operating systems. In the same vein, it attempts to function within minimal command environments, which reduces dependencies. Finally, it is designed to offer parallelism across as many cores and nodes as necessary, or as few as desired, whether on a personal computer or a supercomputer. Underlying this workflow are patches that allow antiquated tools to compile and execute on modern hardware. Also, fixes to long-standing bugs and limitations in the existing MF converters have been prepared. Specifically, support for the conversion of MF-96- and Horizontal Flow Barrier-coupled simulations has been added. More radically, we have laid the foundations of a conversion utility between MF and a similar modeller, ParFlow. Furthermore, the modular approach followed may extend to an application which inter-operates between arbitrary groundwater simulators. In short, an accessible and portable workflow for up-conversion between MODFLOW versions now avails itself to geoscientists. Updated programs within it may allow for the re-use, in whole or in part, of legacy simulations. Lastly, a generic inter-operator has been established, raising the possibility of significantly easier recycling of groundwater data in the future.

  18. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS

    PubMed Central

    Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2016-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971

  19. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.

    PubMed

    Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2017-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.

  20. The future of scientific workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Peterka, Tom; Altintas, Ilkay

    Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

  1. Development of a user customizable imaging informatics-based intelligent workflow engine system to enhance rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Martinez, Clarisa; Wang, Jing; Liu, Ye; Liu, Brent

    2014-03-01

    Clinical trials usually need to collect, track and analyze multimedia data according to a defined workflow. Currently, clinical trial data management requirements are normally addressed with custom-built systems. Challenges arise in designing the workflow for different trials. A traditional, pre-defined custom-built system is usually limited to a specific clinical trial and normally requires time-consuming and resource-intensive software development. To provide a solution, we present a user-customizable, imaging-informatics-based intelligent workflow engine system for managing stroke rehabilitation clinical trials. The intelligent workflow engine provides flexibility in building and tailoring the workflow in various stages of clinical trials. By providing a solution to tailor and automate the workflow, the system will save time and reduce errors for clinical trials. Although our system is designed for rehabilitation clinical trials, it may be extended to other imaging-based clinical trials as well.

  2. The standard-based open workflow system in GeoBrain (Invited)

    NASA Astrophysics Data System (ADS)

    Di, L.; Yu, G.; Zhao, P.; Deng, M.

    2013-12-01

    GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. Workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflows for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant product provenance metadata before and after workflow execution. Generating provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119 (transparent, translucent, and opaque) are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after proper peer review of the models. Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.

  3. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  4. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  5. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    DOE PAGES

    Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...

    2015-01-01

    This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and general-purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
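
    The core idea of deadline-constrained cost optimization with hourly billing can be conveyed with a toy sketch. The paper formulates the problem as a mathematical program in AMPL/CMPL; the exhaustive enumeration below, with hypothetical instance types, prices, and task rates, is only an illustration of the trade-off being optimized.

    ```python
    import math

    # Toy illustration of deadline-constrained cost selection with hourly billing.
    # Instance names, prices, and processing rates are hypothetical.
    instances = {             # price per hour (USD), tasks processed per hour
        "small":  {"price": 0.10, "rate": 20},
        "medium": {"price": 0.20, "rate": 45},
        "large":  {"price": 0.40, "rate": 100},
    }

    def cheapest_within_deadline(n_tasks, deadline_h, max_vms=8):
        """Check every (type, #VMs) combination that meets the deadline."""
        best = None
        for name, spec in instances.items():
            for vms in range(1, max_vms + 1):
                hours = math.ceil(n_tasks / (spec["rate"] * vms))  # hourly billing
                if hours > deadline_h:
                    continue
                cost = hours * vms * spec["price"]
                if best is None or cost < best[0]:
                    best = (cost, name, vms, hours)
        return best  # (cost, instance type, #VMs, billed hours)

    print(cheapest_within_deadline(n_tasks=500, deadline_h=3))
    ```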

  6. Metaworkflows and Workflow Interoperability for Heliophysics

    NASA Astrophysics Data System (ADS)

    Pierantoni, Gabriele; Carley, Eoin P.

    2014-06-01

    Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is well suited to workflows. This approach has been investigated in the HELIO project, which developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, which forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows into three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges; they implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows, usually developed in TAVERNA. They implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter sweep jobs to investigate Science Cases on large, multiple data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.
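
    The Iterative Science Workflows layer described above amounts to running the same sub-workflow over many datasets as a parameter sweep. The sketch below illustrates that pattern generically in Python; the sub-workflow body and dataset identifiers are hypothetical stand-ins, not HELIO or WS-PGRADE code.

    ```python
    # Generic sketch of a parameter-sweep layer that runs the same sub-workflow
    # over many input datasets. Function and dataset names are hypothetical.
    from concurrent.futures import ProcessPoolExecutor

    def science_subworkflow(dataset_id):
        # Stand-in for a Basic/Science Workflow invocation (e.g., query a CME
        # catalogue, fetch images, run an analysis service).
        return {"dataset": dataset_id, "events_found": len(dataset_id)}

    def iterative_workflow(dataset_ids, workers=4):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(science_subworkflow, dataset_ids))

    if __name__ == "__main__":
        print(iterative_workflow(["cme_2012_03", "cme_2013_11", "cme_2014_06"]))
    ```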

  7. Using EHR audit trail logs to analyze clinical workflow: A case study from community-based ambulatory clinics.

    PubMed

    Wu, Danny T Y; Smart, Nikolas; Ciemins, Elizabeth L; Lanham, Holly J; Lindberg, Curt; Zheng, Kai

    2017-01-01

    To develop a workflow-supported clinical documentation system, it is a critical first step to understand clinical workflow. While time-and-motion studies have been regarded as the gold standard of workflow analysis, this method can be resource-consuming and its data may be biased by the cognitive limitations of human observers. In this study, we aimed to evaluate the feasibility and validity of using EHR audit trail logs to analyze clinical workflow. Specifically, we compared three known workflow changes from our previous study with the corresponding EHR audit trail logs of the study participants. The results showed that EHR audit trail logs can be a valid source for clinical workflow analysis, and can provide an objective view of clinicians' behaviors, multi-dimensional comparisons, and a highly extensible analysis framework.
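
    A minimal sketch of the kind of analysis audit-trail logs enable: sort events per user and measure the time between consecutive actions. The column names and events are hypothetical, not the structure of any particular EHR's logs.

    ```python
    # Sketch of deriving workflow timing from EHR audit-trail logs: sort events
    # per user and compute the gap between consecutive actions.
    import pandas as pd

    log = pd.DataFrame({
        "user": ["rn01", "rn01", "rn01", "md02", "md02"],
        "action": ["open_chart", "review_labs", "write_note",
                   "open_chart", "sign_order"],
        "timestamp": pd.to_datetime([
            "2017-01-05 09:00", "2017-01-05 09:04", "2017-01-05 09:15",
            "2017-01-05 09:02", "2017-01-05 09:20",
        ]),
    })

    log = log.sort_values(["user", "timestamp"])
    log["minutes_since_prev"] = (
        log.groupby("user")["timestamp"].diff().dt.total_seconds() / 60
    )
    # Average dwell time before each action type, per user
    print(log.groupby(["user", "action"])["minutes_since_prev"].mean())
    ```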

  8. Observing health professionals' workflow patterns for diabetes care - First steps towards an ontology for EHR services.

    PubMed

    Schweitzer, M; Lasierra, N; Hoerbst, A

    2015-01-01

    Increasing the flexibility from a user-perspective and enabling a workflow based interaction, facilitates an easy user-friendly utilization of EHRs for healthcare professionals' daily work. To offer such versatile EHR-functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web-services. The backbone of such architecture is an ontology which enables to represent clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results after running observations of diabetes routine consultations which were conducted in order to identify those workflows and the relation among the included tasks. Mentioned workflows were first modeled by BPMN and then generalized. As a following step in our study, interviews will be conducted with clinical personnel to validate modeled workflows.

  9. Multi-modality molecular imaging: pre-clinical laboratory configuration

    NASA Astrophysics Data System (ADS)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  10. NASA Langley Atmospheric Science Data Center (ASDC) Experience with Aircraft Data

    NASA Astrophysics Data System (ADS)

    Perez, J.; Sorlie, S.; Parker, L.; Mason, K. L.; Rinsland, P.; Kusterer, J.

    2011-12-01

    Over the past decade the NASA Langley ASDC has archived and distributed a variety of aircraft mission data sets. These datasets posed unique archiving challenges, ranging from the rigidity of the archiving system and formats to the lack of metadata. The ASDC developed a state-of-the-art data archive and distribution system to serve the atmospheric sciences data provider and researcher communities. The system, called Archive - Next Generation (ANGe), is designed with a distributed, multi-tier, service-based, message-oriented architecture enabling new methods for searching, accessing, and customizing data. The ANGe system provides the ease and flexibility to ingest and archive aircraft data through an ad hoc workflow or to develop a new workflow to suit the provider's needs. The ASDC will describe the challenges encountered in preparing aircraft data for archiving and distribution. The ASDC is currently providing guidance to the DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) Earth Venture-1 project on developing collection, granule, and browse metadata, as well as supporting the ADAM (Airborne Data For Assessing Models) site.

  11. Ca analysis: an Excel based program for the analysis of intracellular calcium transients including multiple, simultaneous regression analysis.

    PubMed

    Greensmith, David J

    2014-01-01

    Here I present an Excel-based program for the analysis of intracellular Ca transients recorded using fluorescent indicators. The program can perform all the necessary steps which convert recorded raw voltage changes into meaningful physiological information. The program performs two fundamental processes. (1) It can prepare the raw signal by several methods. (2) It can then be used to analyze the prepared data to provide information such as absolute intracellular Ca levels. Also, the rates of change of Ca can be measured using multiple, simultaneous regression analysis. I demonstrate that this program performs as well as commercially available software, but has numerous advantages, namely creating a simplified, self-contained analysis workflow. Copyright © 2013 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
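
    A minimal sketch of the two core steps the abstract describes: converting a background-corrected fluorescence signal to an absolute Ca concentration with a single-wavelength calibration equation, and estimating a rate of change by least-squares regression over a window. The calibration constants and signal are illustrative values, not the program's defaults.

    ```python
    # Sketch: calibrate a fluorescence signal to [Ca2+], then estimate a rate of
    # change by linear regression. Kd, Fmin and Fmax are illustrative values.
    import numpy as np

    def f_to_ca(f, kd=400.0, fmin=50.0, fmax=1000.0):
        """[Ca2+] in nM from fluorescence F (single-wavelength indicator form)."""
        return kd * (f - fmin) / (fmax - f)

    t = np.linspace(0, 1.0, 200)                      # time, s
    f = 300 + 400 * np.exp(-t / 0.2)                  # synthetic decaying signal
    ca = f_to_ca(f)

    # Rate of Ca decline over 0.1-0.4 s estimated by least-squares regression
    win = (t >= 0.1) & (t <= 0.4)
    slope, intercept = np.polyfit(t[win], ca[win], 1)
    print(f"decline rate: {slope:.1f} nM/s")
    ```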

  12. Development of an impairment-based individualized treatment workflow using an iPad-based software platform.

    PubMed

    Kiran, Swathi; Des Roches, Carrie; Balachandran, Isabel; Ascenso, Elsa

    2014-02-01

    Individuals with language and cognitive deficits following brain damage likely require long-term rehabilitation. Consequently, providing the continued communication therapy that these individuals require is a major practical problem. The present project describes the development of an impairment-based individualized treatment workflow using a software platform called Constant Therapy. This article is organized into two sections. We first describe the general methods of the treatment workflow for patients involved in this study. There are four steps in this process: (1) the patient's impairment is assessed using standardized tests, (2) the patient is assigned a specific and individualized treatment plan, (3) the patient practices the therapy at home and at the clinic, and (4) the clinician and the patient can analyze the results of the patient's performance remotely and monitor and alter the treatment plan accordingly. The second section provides four case studies that offer a representative sample of participants progressing through their individualized treatment plan. The preliminary results of the patient treatment provide encouraging evidence for the feasibility of a rehabilitation program for individuals with brain damage based on the iPad (Apple Inc., Cupertino, CA).

  13. MetaboLyzer: A Novel Statistical Workflow for Analyzing Post-Processed LC/MS Metabolomics Data

    PubMed Central

    Mak, Tytus D.; Laiakis, Evagelia C.; Goudarzi, Maryam; Fornace, Albert J.

    2014-01-01

    Metabolomics, the global study of small molecules in a particular system, has in the last few years risen to become a primary -omics platform for the study of metabolic processes. With the ever-increasing pool of quantitative data yielded from metabolomic research, specialized methods and tools with which to analyze and extract meaningful conclusions from these data are becoming more and more crucial. Furthermore, the depth of knowledge and expertise required to undertake a metabolomics-oriented study is a daunting obstacle to investigators new to the field. As such, we have created a new statistical analysis workflow, MetaboLyzer, which aims both to simplify analysis for investigators new to metabolomics and to provide experienced investigators the flexibility to conduct sophisticated analysis. MetaboLyzer’s workflow is specifically tailored to the unique characteristics and idiosyncrasies of post-processed liquid chromatography/mass spectrometry (LC/MS) based metabolomic datasets. It utilizes a wide gamut of statistical tests, procedures, and methodologies that belong to classical biostatistics, as well as several novel statistical techniques that we have developed specifically for metabolomics data. Furthermore, MetaboLyzer conducts rapid putative ion identification and putative biologically relevant analysis via incorporation of four major small molecule databases: KEGG, HMDB, Lipid Maps, and BioCyc. MetaboLyzer incorporates these aspects into a comprehensive workflow that outputs easy-to-understand, statistically significant and potentially biologically relevant information in the form of heatmaps, volcano plots, 3D visualization plots, correlation maps, and metabolic pathway hit histograms. For demonstration purposes, a urine metabolomics data set from a previously reported radiobiology study in which samples were collected from mice exposed to gamma radiation was analyzed. MetaboLyzer was able to identify 243 statistically significant ions out of a total of 1942. Numerous putative metabolites and pathways were found to be biologically significant from the putative ion identification workflow. PMID:24266674
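
    A generic illustration of the kind of per-ion significance filtering that feeds a volcano plot (Welch t-test plus fold change); it is not MetaboLyzer's actual statistical workflow, and the data are synthetic.

    ```python
    # Illustrative volcano-style filter for post-processed LC/MS ion abundances:
    # a Welch t-test and fold change per ion. Generic sketch with synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.lognormal(mean=5.0, sigma=0.3, size=(10, 200))  # 10 samples x 200 ions
    exposed = rng.lognormal(mean=5.0, sigma=0.3, size=(10, 200))
    exposed[:, :20] *= 2.0                                        # 20 truly changed ions

    t_stat, p_val = stats.ttest_ind(exposed, control, axis=0, equal_var=False)
    log2_fc = np.log2(exposed.mean(axis=0) / control.mean(axis=0))

    significant = (p_val < 0.05) & (np.abs(log2_fc) > 1.0)
    print(f"{significant.sum()} of {len(p_val)} ions pass p<0.05 and |log2FC|>1")
    ```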

  14. Basic design of MRM assays for peptide quantification.

    PubMed

    James, Andrew; Jorgensen, Claus

    2010-01-01

    With the recent availability and accessibility of mass spectrometry for basic and clinical research, the requirement for stable, sensitive, and reproducible assays to specifically detect proteins of interest has increased. Multiple reaction monitoring (MRM) or selected reaction monitoring (SRM) is a highly selective, sensitive, and robust assay to monitor the presence and amount of biomolecules. Until recently, MRM was typically used for the detection of drugs and other biomolecules from body fluids. With increased focus on biomarkers and systems biology approaches, researchers in the proteomics field have taken advantage of this technique. In this chapter, we will introduce the reader to the basic principle of designing and optimizing an MRM workflow. We provide examples of MRM workflows for standard proteomic samples and provide suggestions for the reader who is interested in using MRM for quantification.
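
    As a worked example of the arithmetic behind a transition, the sketch below computes a peptide's precursor m/z and a y-ion fragment m/z from approximate monoisotopic residue masses (unmodified peptides only). Real assay design also weighs fragment intensity, interference, and retention time; the example peptide is hypothetical.

    ```python
    # Worked example of MRM/SRM transition arithmetic: precursor m/z and y-ion
    # m/z for an unmodified peptide, using approximate monoisotopic residue masses.
    RESIDUE = {
        "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
        "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
        "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
        "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
        "R": 156.10111, "Y": 163.06333, "W": 186.07931,
    }
    H2O, PROTON = 18.01056, 1.00728

    def precursor_mz(peptide, charge=2):
        mass = sum(RESIDUE[aa] for aa in peptide) + H2O
        return (mass + charge * PROTON) / charge

    def y_ion_mz(peptide, length, charge=1):
        """m/z of the y-ion made of the C-terminal `length` residues."""
        mass = sum(RESIDUE[aa] for aa in peptide[-length:]) + H2O
        return (mass + charge * PROTON) / charge

    pep = "ELVISLK"   # hypothetical tryptic peptide
    print(round(precursor_mz(pep, 2), 3), round(y_ion_mz(pep, 4, 1), 3))
    ```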

  15. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, the Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include:
    - Discovery of the collections by keyword search, exposed using the OpenSearch protocol;
    - Space/time query across the CDRs' granules and all of the input datasets via OpenSearch;
    - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets;
    - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files);
    - Self-documenting CDRs published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance;
    - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine;
    - Open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine;
    - Advertising of the metadata (e.g., physical variables provided, space/time bounding box) for our prepared datasets as "datacasts" using the Atom feed format;
    - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops;
    - Rich "web browse" of the CDRs with full metadata and the provenance trail one click away;
    - Advertising of all services as Google-discoverable "service casts" using the Atom format.
    The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUIs.
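
    A minimal sketch of the space/time discovery step listed above: fill an OpenSearch-style URL template and parse the returned Atom feed. The endpoint and query parameter names are hypothetical; a real service advertises its template in an OpenSearch description document.

    ```python
    # Sketch of OpenSearch-style space/time discovery: build a query URL and
    # parse the returned Atom feed. Endpoint and parameter names are hypothetical.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def search_granules(base_url, bbox, start, end, keyword="water vapor"):
        query = urlencode({
            "q": keyword,
            "bbox": ",".join(str(v) for v in bbox),   # west,south,east,north
            "time_start": start,
            "time_end": end,
        })
        with urlopen(f"{base_url}?{query}") as resp:  # expects an Atom feed
            feed = ET.parse(resp).getroot()
        return [
            (entry.findtext(f"{ATOM}title"), entry.find(f"{ATOM}link").get("href"))
            for entry in feed.iter(f"{ATOM}entry")
        ]

    # Example call (hypothetical endpoint):
    # granules = search_granules("https://example.org/opensearch/granules",
    #                            bbox=(-125, 30, -110, 45),
    #                            start="2008-01-01", end="2008-01-31")
    ```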

  16. Data delivery workflow in an academic information warehouse.

    PubMed

    Erdal, Selnur; Rogers, Patrick; Santangelo, Jennifer; Buskirk, Jason; Ostrander, Michael; Liu, Jianhua; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) collects data from many systems throughout the OSUMC on load cycles ranging from real-time to on-demand. The data is then prepared for delivery to a diversity of customers across the clinical, education, and research sectors of the OSUMC. Some of the data collected at the IW include patient management, billing and finance, procedures, medications, lab results, clinical reports, physician order entry, outcomes, demographics, and so on. This data is made available to the users of the IW in a variety of formats and through a variety of methods.

  17. Design and implementation of workflow engine for service-oriented architecture

    NASA Astrophysics Data System (ADS)

    Peng, Shuqing; Duan, Huining; Chen, Deyun

    2009-04-01

    As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, limited reusability, and difficult maintenance. In this paper, in order to improve the stability, scalability, and flexibility of workflow management systems, a four-layer workflow engine architecture based on SOA is put forward according to the Workflow Management Coalition's XPDL standard; the route control mechanism in the control model is implemented; scheduling strategies for cyclic and acyclic routing are designed; and the workflow engine is implemented using technologies such as XML, JSP, and EJB.

  18. Producing an Infrared Multiwavelength Galactic Plane Atlas Using Montage, Pegasus, and Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Rynge, M.; Juve, G.; Kinney, J.; Good, J.; Berriman, B.; Merrihew, A.; Deelman, E.

    2014-05-01

    In this paper, we describe how to leverage cloud resources to generate large-scale mosaics of the galactic plane in multiple wavelengths. Our goal is to generate a 16-wavelength infrared Atlas of the Galactic Plane at a common spatial sampling of 1 arcsec, processed so that the images appear to have been measured with a single instrument. This will be achieved by using the Montage image mosaic engine to process observations from the 2MASS, GLIMPSE, MIPSGAL, MSX and WISE datasets, over a wavelength range of 1 μm to 24 μm, and by using the Pegasus Workflow Management System to manage the workload. When complete, the Atlas will be made available to the community as a data product. We are generating images that cover ±180° in Galactic longitude and ±20° in Galactic latitude, to the extent permitted by the spatial coverage of each dataset. Each image will be 5°x5° in size (including an overlap of 1° with neighboring tiles), resulting in an atlas of 1,001 images. The final size will be about 50 TB. This paper focuses on the computational challenges, solutions, and lessons learned in producing the Atlas. To manage the computation we are using the Pegasus Workflow Management System, a mature, highly fault-tolerant system now in release 4.2.2 that has found wide applicability across many science disciplines. A scientific workflow describes the dependencies between tasks; in most cases the workflow is described as a directed acyclic graph, where the nodes are tasks and the edges denote the task dependencies. A defining property of a scientific workflow is that it manages data flow between tasks. Applied to the galactic plane project, each 5°x5° mosaic is a Pegasus workflow. Pegasus is used to fetch the source images, execute the image mosaicking steps of Montage, and store the final outputs in a storage system. Because these workflows are very I/O intensive, care has to be taken when choosing the infrastructure on which to execute them. In our setup, we chose dynamically provisioned compute clusters running on the Amazon Elastic Compute Cloud (EC2). All our instances use the same base image, which is configured to come up as a master node by default. The master node is a central instance from which the workflow can be managed. Additional worker instances are provisioned and configured to accept work assignments from the master node. The system allows for adding and removing workers in an ad hoc fashion, and can be run in large configurations. To date we have performed 245,000 CPU hours of computing and generated 7,029 images totaling 30 TB. With the current setup, the runtime for the whole project would be about 340,000 CPU hours. Using spot m2.4xlarge instances, the cost would be approximately $5,950. Using faster AWS instances, such as cc2.8xlarge, could potentially decrease the total CPU hours and further reduce compute costs. The paper explores these tradeoffs.
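
    The quoted cost estimate follows from simple arithmetic, reproduced below. Only the 340,000 CPU-hour total and the roughly $5,950 figure come from the text; the vCPU count per m2.4xlarge instance and the spot price are assumptions chosen for illustration.

    ```python
    # Back-of-the-envelope reproduction of the quoted cost estimate. The vCPU
    # count and spot price are assumptions; only the 340,000 CPU hours and the
    # ~$5,950 figure come from the text.
    cpu_hours_total = 340_000
    vcpus_per_instance = 8        # assumed for m2.4xlarge
    spot_price_per_hour = 0.14    # assumed USD/hour

    instance_hours = cpu_hours_total / vcpus_per_instance
    cost = instance_hours * spot_price_per_hour
    print(f"{instance_hours:,.0f} instance-hours -> ${cost:,.0f}")   # ~ $5,950
    ```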

  19. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.

    PubMed

    Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.

  20. Automated selected reaction monitoring data analysis workflow for large-scale targeted proteomic studies.

    PubMed

    Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi

    2013-08-01

    Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.
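
    A minimal sketch of reading a tab-separated transition list like the one the protocol expects and grouping transitions by peptide and label (light analyte vs. heavy internal standard). The column names and values are hypothetical placeholders, not the protocol's required format.

    ```python
    # Sketch: parse a tab-separated transition list and group transitions by
    # peptide and label. Column names and m/z values are placeholders.
    import csv
    import io
    from collections import defaultdict

    TSV = (
        "peptide\tprecursor_mz\tproduct_mz\tlabel\n"
        "ELVISLK\t400.26\t474.33\tlight\n"
        "ELVISLK\t404.27\t482.34\theavy\n"
        "PEPTIDER\t478.74\t603.31\tlight\n"
    )

    transitions = defaultdict(list)
    for row in csv.DictReader(io.StringIO(TSV), delimiter="\t"):
        transitions[(row["peptide"], row["label"])].append(
            (float(row["precursor_mz"]), float(row["product_mz"]))
        )

    for (pep, label), trans in transitions.items():
        print(pep, label, len(trans), "transition(s)")
    ```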

  1. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory

    PubMed Central

    Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721

  2. Global Relative Quantification with Liquid Chromatography–Matrix-assisted Laser Desorption Ionization Time-of-flight (LC-MALDI-TOF)—Cross–validation with LTQ-Orbitrap Proves Reliability and Reveals Complementary Ionization Preferences*

    PubMed Central

    Hessling, Bernd; Büttner, Knut; Hecker, Michael; Becher, Dörte

    2013-01-01

    Quantitative LC-MALDI is an underrepresented method, especially in large-scale experiments. The additional fractionation step that is needed for most MALDI-TOF-TOF instruments, the comparatively long analysis time, and the very limited number of established software tools for the data analysis render LC-MALDI a niche application for large quantitative analyses beside the widespread LC–electrospray ionization workflows. Here, we used LC-MALDI in a relative quantification analysis of Staphylococcus aureus for the first time on a proteome-wide scale. Samples were analyzed in parallel with an LTQ-Orbitrap, which allowed cross-validation with a well-established workflow. With nearly 850 proteins identified in the cytosolic fraction and quantitative data for more than 550 proteins obtained with the MASCOT Distiller software, we were able to prove that LC-MALDI is able to process highly complex samples. The good correlation of quantities determined via this method and the LTQ-Orbitrap workflow confirmed the high reliability of our LC-MALDI approach for global quantification analysis. Because the existing literature reports differences for MALDI and electrospray ionization preferences and the respective experimental work was limited by technical or methodological constraints, we systematically compared biochemical attributes of peptides identified with either instrument. This genome-wide, comprehensive study revealed biases toward certain peptide properties for both MALDI-TOF-TOF- and LTQ-Orbitrap-based approaches. These biases are based on almost 13,000 peptides and result in a general complementarity of the two approaches that should be exploited in future experiments. PMID:23788530

  3. Global relative quantification with liquid chromatography-matrix-assisted laser desorption ionization time-of-flight (LC-MALDI-TOF)--cross-validation with LTQ-Orbitrap proves reliability and reveals complementary ionization preferences.

    PubMed

    Hessling, Bernd; Büttner, Knut; Hecker, Michael; Becher, Dörte

    2013-10-01

    Quantitative LC-MALDI is an underrepresented method, especially in large-scale experiments. The additional fractionation step that is needed for most MALDI-TOF-TOF instruments, the comparatively long analysis time, and the very limited number of established software tools for the data analysis render LC-MALDI a niche application for large quantitative analyses beside the widespread LC-electrospray ionization workflows. Here, we used LC-MALDI in a relative quantification analysis of Staphylococcus aureus for the first time on a proteome-wide scale. Samples were analyzed in parallel with an LTQ-Orbitrap, which allowed cross-validation with a well-established workflow. With nearly 850 proteins identified in the cytosolic fraction and quantitative data for more than 550 proteins obtained with the MASCOT Distiller software, we were able to prove that LC-MALDI is able to process highly complex samples. The good correlation of quantities determined via this method and the LTQ-Orbitrap workflow confirmed the high reliability of our LC-MALDI approach for global quantification analysis. Because the existing literature reports differences for MALDI and electrospray ionization preferences and the respective experimental work was limited by technical or methodological constraints, we systematically compared biochemical attributes of peptides identified with either instrument. This genome-wide, comprehensive study revealed biases toward certain peptide properties for both MALDI-TOF-TOF- and LTQ-Orbitrap-based approaches. These biases are based on almost 13,000 peptides and result in a general complementarity of the two approaches that should be exploited in future experiments.

  4. A three-level atomicity model for decentralized workflow management systems

    NASA Astrophysics Data System (ADS)

    Ben-Shaul, Israel Z.; Heineman, George T.

    1996-12-01

    A workflow management system (WFMS) employs a workflow manager (WM) to execute and automate the various activities within a workflow. To protect the consistency of data, the WM encapsulates each activity with a transaction; a transaction manager (TM) then guarantees the atomicity of activities. Since workflows often group several activities together, the TM is responsible for guaranteeing the atomicity of these units. There are scalability issues, however, with centralized WFMSs. Decentralized WFMSs provide an architecture for multiple autonomous WFMSs to interoperate, thus accommodating multiple workflows and geographically-dispersed teams. When atomic units are composed of activities spread across multiple WFMSs, however, there is a conflict between global atomicity and local autonomy of each WFMS. This paper describes a decentralized atomicity model that enables workflow administrators to specify the scope of multi-site atomicity based upon the desired semantics of multi-site tasks in the decentralized WFMS. We describe an architecture that realizes our model and execution paradigm.

  5. A Tool Supporting Collaborative Data Analytics Workflow Design and Management

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Bao, Q.; Lee, T. J.

    2016-12-01

    Collaborative experiment design could significantly enhance the sharing and adoption of the data analytics algorithms and models emerging in Earth science. Existing data-oriented workflow tools, however, are not well suited to supporting collaborative design of such workflows: for example, they do not support real-time co-design; they cannot track how a workflow evolves over time based on changing designs contributed by multiple Earth scientists; and they do not capture and retrieve collaboration knowledge on workflow design (the discussions that lead to a design). To address these challenges, we have designed and developed a technique supporting collaborative data-oriented workflow composition and management, as a key component toward supporting big data collaboration through the Internet. Reproducibility and scalability are two major targets demanding fundamental infrastructural support. One outcome of the project is a software tool that supports an elastic number of groups of Earth scientists in collaboratively designing and composing data analytics workflows through the Internet. Instead of recreating the wheel, we have extended an existing workflow tool, VisTrails, into an online collaborative environment as a proof of concept.

  6. Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis.

    PubMed

    Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan

    2015-01-01

    The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single-fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map; identification of potential failure modes; description of the cause and effect, temporal occurrence, and team member involvement in each failure mode; and examination of existing safety controls. A risk priority number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard against failure modes with significant RPN values. After workflow alterations, RPNs were recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only 1 failure mode maintained RPN significance: patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
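
    The RPN scoring used to rank failure modes is a simple product of three ratings; the sketch below shows the calculation and threshold filtering with illustrative failure modes and scores, not those from the STAT RAD analysis.

    ```python
    # Worked example of FMEA scoring: RPN = occurrence x severity x detectability
    # rating, ranked and compared against a review threshold. Values illustrative.
    failure_modes = [
        {"mode": "patient motion after alignment", "O": 4, "S": 8, "D": 7},
        {"mode": "wrong CT dataset loaded",         "O": 2, "S": 9, "D": 3},
        {"mode": "checklist step skipped",          "O": 5, "S": 4, "D": 4},
    ]

    RPN_THRESHOLD = 100
    for fm in failure_modes:
        fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

    for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
        flag = "REVIEW" if fm["RPN"] >= RPN_THRESHOLD else "ok"
        print(f"{fm['mode']:<35} RPN={fm['RPN']:<4} {flag}")
    ```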

  7. RNA-DNA hybrid (R-loop) immunoprecipitation mapping: an analytical workflow to evaluate inherent biases

    PubMed Central

    Halász, László; Karányi, Zsolt; Boros-Oláh, Beáta; Kuik-Rózsa, Tímea; Sipos, Éva; Nagy, Éva; Mosolygó-L, Ágnes; Mázló, Anett; Rajnavölgyi, Éva; Halmos, Gábor; Székvölgyi, Lóránt

    2017-01-01

    The impact of R-loops on the physiology and pathology of chromosomes has been demonstrated extensively by chromatin biology research. The progress in this field has been driven by technological advancement of R-loop mapping methods that largely relied on a single approach, DNA-RNA immunoprecipitation (DRIP). Most of the DRIP protocols use the experimental design that was developed by a few laboratories, without paying attention to the potential caveats that might affect the outcome of RNA-DNA hybrid mapping. To assess the accuracy and utility of this technology, we pursued an analytical approach to estimate inherent biases and errors in the DRIP protocol. By performing DRIP-sequencing, qPCR, and receiver operator characteristic (ROC) analysis, we tested the effect of formaldehyde fixation, cell lysis temperature, mode of genome fragmentation, and removal of free RNA on the efficacy of RNA-DNA hybrid detection and implemented workflows that were able to distinguish complex and weak DRIP signals in a noisy background with high confidence. We also show that some of the workflows perform poorly and generate random answers. Furthermore, we found that the most commonly used genome fragmentation method (restriction enzyme digestion) led to the overrepresentation of lengthy DRIP fragments over coding ORFs, and this bias was enhanced at the first exons. Biased genome sampling severely compromised mapping resolution and prevented the assignment of precise biological function to a significant fraction of R-loops. The revised workflow presented herein is established and optimized using objective ROC analyses and provides reproducible and highly specific RNA-DNA hybrid detection. PMID:28341774
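
    A generic sketch of how ROC analysis can compare workflow variants: each variant assigns a signal score to the same regions with known positive or negative status, and the better workflow separates the classes with a higher area under the curve. The data here are synthetic.

    ```python
    # Sketch of comparing two mapping-workflow variants by ROC AUC against a
    # positive/negative truth set. Synthetic data for illustration only.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    truth = np.repeat([1, 0], 500)   # known R-loop vs. background regions
    workflow_a = np.where(truth == 1, rng.normal(2.0, 1, 1000), rng.normal(0, 1, 1000))
    workflow_b = np.where(truth == 1, rng.normal(0.5, 1, 1000), rng.normal(0, 1, 1000))

    print("workflow A AUC:", round(roc_auc_score(truth, workflow_a), 3))
    print("workflow B AUC:", round(roc_auc_score(truth, workflow_b), 3))
    ```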

  8. Uncertainty in training image-based inversion of hydraulic head data constrained to ERT data: Workflow and case study

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Caers, Jef

    2015-07-01

    In inverse problems, investigating uncertainty in the posterior distribution of model parameters is as important as matching data. In recent years, most efforts have focused on techniques to sample the posterior distribution with reasonable computational costs. Within a Bayesian context, this posterior depends on the prior distribution. However, most studies ignore modeling the prior with realistic geological uncertainty. In this paper, we propose a workflow inspired by a Popper-Bayes philosophy that data should first be used to falsify models, and only then be considered for matching. The workflow consists of three steps: (1) in defining the prior, we interpret multiple alternative geological scenarios from literature (architecture of facies) and site-specific data (proportions of facies). Prior spatial uncertainty is modeled using multiple-point geostatistics, where each scenario is defined using a training image. (2) We validate these prior geological scenarios by simulating electrical resistivity tomography (ERT) data on realizations of each scenario and comparing them to field ERT in a lower dimensional space. In this second step, the idea is to probabilistically falsify scenarios with ERT, meaning that scenarios which are incompatible receive an updated probability of zero while compatible scenarios receive a nonzero updated belief. (3) We constrain the hydrogeological model with hydraulic head and ERT using a stochastic search method. The workflow is applied to a synthetic and a field case study in an alluvial aquifer. This study highlights the importance of considering and estimating prior uncertainty (without data) through a process of probabilistic falsification.

  9. MAAMD: a workflow to standardize meta-analyses and comparison of affymetrix microarray data

    PubMed Central

    2014-01-01

    Background Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix CEL files) can be time consuming, complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. Results We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data analysis (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable csv files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and Drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and Drosophila. The entire set of analyses for 4 datasets (34 total microarrays) finished in about one hour. Conclusions MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons. PMID:24621103

  10. Image processing and classification procedures for analysis of sub-decimeter imagery acquired with an unmanned aircraft over arid rangelands

    USDA-ARS?s Scientific Manuscript database

    Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...

  11. Multi-mode acquisition (MMA): An MS/MS acquisition strategy for maximizing selectivity, specificity and sensitivity of DIA product ion spectra.

    PubMed

    Williams, Brad J; Ciavarini, Steve J; Devlin, Curt; Cohn, Steven M; Xie, Rong; Vissers, Johannes P C; Martin, LeRoy B; Caswell, Allen; Langridge, James I; Geromanos, Scott J

    2016-08-01

    In proteomics studies, it is generally accepted that depth of coverage and dynamic range are limited in data-directed acquisitions. The serial nature of the method limits both sensitivity and the number of precursor ions that can be sampled. To that end, a number of data-independent acquisition (DIA) strategies have been introduced; these methods are, for the most part, immune to the sampling issue, although some do have other limitations with respect to sensitivity. The major limitation of DIA approaches is interference, i.e., MS/MS spectra are highly chimeric and often incapable of being identified using conventional database search engines. Utilizing each available dimension of separation prior to ion detection, we present a new multi-mode acquisition (MMA) strategy multiplexing both narrowband and wideband DIA acquisitions in a single analytical workflow. The iterative nature of the MMA workflow limits the adverse effects of interference with minimal loss in sensitivity. Qualitative identification can be performed by selected ion chromatograms or conventional database search strategies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprised of the HydroShare app, HydroShare and HydroDS web services is open source and contributes to capability for web based modeling research.
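
    The auto-generated Python script mentioned above essentially records a sequence of web-service calls followed by an upload of the results. The sketch below conveys that shape using the requests library; the endpoints, parameter names, and token are hypothetical, not the actual HydroDS or HydroShare API.

    ```python
    # Sketch of a "reproduce the input preparation" script: call data-preparation
    # web services, then upload the resulting files to a sharing system.
    # Endpoints, parameters, and token are hypothetical.
    import requests

    DATA_SERVICE = "https://example.org/hydrods/api"
    SHARE_SERVICE = "https://example.org/hydroshare/api"
    TOKEN = {"Authorization": "Bearer <token>"}

    def prepare_inputs(watershed_bbox, start_date, end_date):
        resp = requests.post(f"{DATA_SERVICE}/prepare_snowmelt_inputs",
                             json={"bbox": watershed_bbox,
                                   "start": start_date, "end": end_date},
                             headers=TOKEN, timeout=300)
        resp.raise_for_status()
        return resp.json()["output_files"]          # URLs of prepared inputs

    def share_results(resource_title, file_urls):
        resp = requests.post(f"{SHARE_SERVICE}/resources",
                             json={"title": resource_title, "files": file_urls},
                             headers=TOKEN, timeout=300)
        resp.raise_for_status()
        return resp.json()["resource_id"]

    # files = prepare_inputs((-112.0, 40.5, -111.5, 41.0), "2010-10-01", "2011-06-30")
    # print(share_results("Snowmelt model inputs, Logan River", files))
    ```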

  13. 76 FR 71928 - Defense Federal Acquisition Regulation Supplement; Updates to Wide Area WorkFlow (DFARS Case 2011...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ... Defense Federal Acquisition Regulation Supplement; Updates to Wide Area WorkFlow (DFARS Case 2011-D027... Wide Area WorkFlow (WAWF) and TRICARE Encounter Data System (TEDS). WAWF, which electronically... civil emergencies, when access to Wide Area WorkFlow by those contractors is not feasible; (4) Purchases...

  14. An Auto-management Thesis Program WebMIS Based on Workflow

    NASA Astrophysics Data System (ADS)

    Chang, Li; Jie, Shi; Weibo, Zhong

    An auto-management WebMIS based on workflow for a bachelor thesis program is given in this paper. A module used for workflow dispatching is designed and realized using MySQL and J2EE according to the working principles of a workflow engine. The module automatically dispatches the workflow according to the system date, login information, and the work status of the user. The WebMIS changes the management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.

  15. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    NASA Astrophysics Data System (ADS)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  16. SU-F-E-10: Student-Driven Exploration of Radiographic Material Properties, Phantom Construction, and Clinical Workflows Or: The Extraordinary Life of CANDY MAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahon, RN; Riblett, MJ; Hugo, GD

    Purpose: To develop a hands-on learning experience that explores the radiological and structural properties of everyday items and applies this knowledge to design a simple phantom for radiotherapy exercises. Methods: Students were asked to compile a list of readily available materials thought to have radiation attenuation properties similar to tissues within the human torso. Participants scanned samples of suggested materials and regions of interest (ROIs) were used to characterize bulk attenuation properties. Properties of each material were assessed via comparison to a Gammex Tissue characterization phantom and used to construct a list of inexpensive near-tissue-equivalent materials. Critical discussions focusing on samples found to differ from student expectations were used to revise and narrow the comprehensive list. From their newly acquired knowledge, students designed and constructed a simple thoracic phantom for use in a simulated clinical workflow. Students were tasked with setting up the phantom and acquiring planning CT images for use in treatment planning and dose delivery. Results: Under engineer and physicist supervision, students were trained to use a CT simulator and acquired images for approximately 60 different foodstuffs, candies, and household items. Through peer discussion, students gained valuable insights and were made to review preconceptions about radiographic material properties. From a subset of imaged materials, a simple phantom was successfully designed and constructed to represent a human thorax. Students received hands-on experience with clinical treatment workflows by learning how to perform CT simulation, create a treatment plan for an embedded tumor, align the phantom for treatment, and deliver a treatment fraction. Conclusion: In this activity, students demonstrated their ability to reason through the radiographic material selection process, construct a simple phantom to specifications, and exercise their knowledge of clinical workflows. Furthermore, the enjoyable and inexpensive nature of this project proved to attract participant interest and drive creative exploration. Mahon and Riblett have nothing to disclose; Hugo has a research agreement with Phillips Medical systems, a license agreement with Varian Medical Systems, research grants from the National Institute of Health. Authors do not have any potential conflicts of interest to disclose.

  17. Quantitative Detection of Low-Abundance Transcripts at Single-Cell Level in Human Epidermal Keratinocytes by Digital Droplet Reverse Transcription-Polymerase Chain Reaction.

    PubMed

    Auvré, Frédéric; Coutier, Julien; Martin, Michèle T; Fortunel, Nicolas O

    2018-05-08

    Genetic and epigenetic characterization of the large cellular diversity observed within tissues is essential to understanding the molecular networks that ensure the regulation of homeostasis, repair, and regeneration, but also pathophysiological processes. Skin is composed of multiple cell lineages and is therefore fully concerned by this complexity. Even within one particular lineage, such as epidermal keratinocytes, different immaturity statuses or differentiation stages are represented, which are still incompletely characterized. Accordingly, there is presently great demand for methods and technologies enabling molecular investigation at single-cell level. Also, most current methods used to analyze gene expression at RNA level, such as RT-qPCR, do not directly provide quantitative data, but rather comparative ratios between two conditions. A second important need in skin biology is thus to determine the number of RNA molecules in a given cell sample. Here, we describe a workflow that we have set up to meet these specific needs, by means of transcript quantification in cellular micro-samples using flow cytometry sorting and reverse transcription-digital droplet polymerase chain reaction. As a proof-of-principle, the workflow was tested for the detection of transcription factor transcripts expressed at low levels in keratinocyte precursor cells. A linear correlation was found between quantification values and keratinocyte input numbers in a low quantity range from 40 cells to 1 cell. Interpretable signals were repeatedly obtained from single-cell samples corresponding to estimated expression levels as low as 10-20 transcript copies per keratinocyte or less. The present workflow may have broad applications for the detection and quantification of low-abundance nucleic acid species in single cells, opening up perspectives for the study of cell-to-cell genetic and molecular heterogeneity. Interestingly, the process described here does not require internal references such as house-keeping gene expression, as it is initiated with defined cell numbers, precisely sorted by flow cytometry.
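
    As background to the absolute quantification described above, droplet digital PCR typically derives copy numbers from a Poisson correction of the fraction of negative droplets. The sketch below shows that generic estimator, not the authors' exact pipeline; the droplet volume is an assumed, instrument-dependent value.

```python
# Generic ddPCR copy-number estimate from droplet counts (Poisson correction).
import math

def ddpcr_copies(total_droplets, positive_droplets, droplet_volume_nl=0.85,
                 sorted_cells=1):
    """Estimate total template copies, concentration, and copies per sorted cell."""
    negative_fraction = (total_droplets - positive_droplets) / total_droplets
    lam = -math.log(negative_fraction)               # mean copies per droplet
    total_copies = lam * total_droplets
    conc_per_ul = lam / (droplet_volume_nl * 1e-3)   # copies per microliter of reaction
    return total_copies, conc_per_ul, total_copies / sorted_cells

copies, conc, per_cell = ddpcr_copies(total_droplets=15000, positive_droplets=120,
                                      sorted_cells=10)
print(f"{copies:.0f} copies, {conc:.1f} copies/uL, {per_cell:.1f} copies/cell")
```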

  18. Structuring clinical workflows for diabetes care: an overview of the OntoHealth approach.

    PubMed

    Schweitzer, M; Lasierra, N; Oberbichler, S; Toma, I; Fensel, A; Hoerbst, A

    2014-01-01

    Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view.
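
    The functional categorization above (workflows composed of actions, decisions, and data elements) can be pictured with a small data-structure sketch. This is a minimal illustration, not the OntoHealth implementation; all class and field names are made up.

```python
# Minimal sketch of workflow building blocks: data elements feed actions,
# and decisions branch between steps.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class DataElement:
    name: str            # e.g. an EHR observation such as "HbA1c"
    source: str = "EHR"

@dataclass
class Action:
    name: str                                # e.g. "order lab test"
    inputs: List[DataElement] = field(default_factory=list)

@dataclass
class Decision:
    question: str                            # e.g. "HbA1c above target?"
    if_true: "Step" = None
    if_false: "Step" = None

Step = Union[Action, Decision]

# Compose a tiny diabetes-care fragment: check a value, then branch.
hba1c = DataElement("HbA1c")
workflow = Decision(
    question="HbA1c above target?",
    if_true=Action("adjust medication", inputs=[hba1c]),
    if_false=Action("schedule routine follow-up"),
)
```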

  19. Structuring Clinical Workflows for Diabetes Care

    PubMed Central

    Lasierra, N.; Oberbichler, S.; Toma, I.; Fensel, A.; Hoerbst, A.

    2014-01-01

    Summary Background Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. Objectives The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. Methods A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. Results This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. Conclusions The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view. PMID:25024765

  20. Modelling and analysis of workflow for lean supply chains

    NASA Astrophysics Data System (ADS)

    Ma, Jinping; Wang, Kanliang; Xu, Lida

    2011-11-01

    Cross-organisational workflow systems are a component of enterprise information systems that support collaborative business processes among organisations in a supply chain. Currently, the majority of workflow systems are developed from the perspective of information modelling, without considering the actual requirements of supply chain management. In this article, we focus on the modelling and analysis of cross-organisational workflow systems in the context of the lean supply chain (LSC) using Petri nets. First, the article describes the assumed conditions of a cross-organisational workflow net according to the idea of LSC and then discusses the standardisation of collaborative business processes between organisations in the context of LSC. Second, the concept of labelled time Petri nets (LTPNs) is defined by combining labelled Petri nets with time Petri nets, the concept of labelled time workflow nets (LTWNs) is defined based on LTPNs, and cross-organisational labelled time workflow nets (CLTWNs) are then defined based on LTWNs. Third, the article proposes the notion of OR-silent CLTWNs and a verification approach to the soundness of LTWNs and CLTWNs. Finally, the article illustrates how to use the proposed method with a simple example. The purpose of this research is to establish a formal method for the modelling and analysis of workflow systems for LSC. This study initiates a new perspective on research into cross-organisational workflow management and promotes the operational management of LSC in real-world settings.
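
    The workflow nets above rest on basic Petri-net semantics: a transition is enabled when every input place holds a token, and firing moves tokens from input to output places. The sketch below shows only that basic firing rule; the labelled and timed extensions (LTPN/LTWN/CLTWN) defined in the paper are not modeled.

```python
# Minimal Petri-net firing rule: marking is a dict place -> token count,
# a transition is a pair (input places, output places).
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# Two-step order-handling fragment shared between organisations (illustrative).
t_send_order = (["start"], ["order_sent"])
t_confirm    = (["order_sent"], ["end"])
m = {"start": 1}
m = fire(m, t_send_order)
m = fire(m, t_confirm)
print(m)   # {'start': 0, 'order_sent': 0, 'end': 1}
```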

  1. myExperiment: a repository and social network for the sharing of bioinformatics workflows

    PubMed Central

    Goble, Carole A.; Bhagat, Jiten; Aleksejevs, Sergejs; Cruickshank, Don; Michaelides, Danius; Newman, David; Borkum, Mark; Bechhofer, Sean; Roos, Marco; Li, Peter; De Roure, David

    2010-01-01

    myExperiment (http://www.myexperiment.org) is an online research environment that supports the social sharing of bioinformatics workflows. These workflows are procedures consisting of a series of computational tasks using web services, which may be performed on data from its retrieval, integration and analysis, to the visualization of the results. As a public repository of workflows, myExperiment allows anybody to discover those that are relevant to their research, which can then be reused and repurposed to their specific requirements. Conversely, developers can submit their workflows to myExperiment and enable them to be shared in a secure manner. Since its release in 2007, myExperiment currently has over 3500 registered users and contains more than 1000 workflows. The social aspect to the sharing of these workflows is facilitated by registered users forming virtual communities bound together by a common interest or research project. Contributors of workflows can build their reputation within these communities by receiving feedback and credit from individuals who reuse their work. Further documentation about myExperiment including its REST web service is available from http://wiki.myexperiment.org. Feedback and requests for support can be sent to bugs@myexperiment.org. PMID:20501605
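
    The REST interface mentioned above can be queried with standard HTTP tooling. The sketch below assumes the public /workflows.xml listing endpoint and its XML response layout as documented on the project wiki; check that documentation before relying on specific fields.

```python
# Query the myExperiment REST listing of public workflows (endpoint and
# parameters assumed from the public API documentation).
import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://www.myexperiment.org/workflows.xml",
                    params={"num": 5}, timeout=30)
resp.raise_for_status()

root = ET.fromstring(resp.content)
for wf in root:
    # Each child element describes one public workflow; print whatever
    # identifying attributes the service returns (e.g. its URI).
    print(wf.tag, wf.attrib, (wf.text or "").strip())
```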

  2. Traversing the many paths of workflow research: developing a conceptual framework of workflow terminology through a systematic literature review

    PubMed Central

    Novak, Laurie L; Johnson, Kevin B; Lorenzi, Nancy M

    2010-01-01

    The objective of this review was to describe methods used to study and model workflow. The authors included studies set in a variety of industries using qualitative, quantitative and mixed methods. Of the 6221 matching abstracts, 127 articles were included in the final corpus. The authors collected data from each article on researcher perspective, study type, methods type, specific methods, approaches to evaluating quality of results, definition of workflow and dependent variables. Ethnographic observation and interviews were the most frequently used methods. Long study durations revealed the large time commitment required for descriptive workflow research. The most frequently discussed technique for evaluating quality of study results was triangulation. The definition of the term “workflow” and choice of methods for studying workflow varied widely across research areas and researcher perspectives. The authors developed a conceptual framework of workflow-related terminology for use in future research and present this model for use by other researchers. PMID:20442143

  3. Digitization workflows for flat sheets and packets of plants, algae, and fungi1

    PubMed Central

    Nelson, Gil; Sweeney, Patrick; Wallace, Lisa E.; Rabeler, Richard K.; Allard, Dorothy; Brown, Herrick; Carter, J. Richard; Denslow, Michael W.; Ellwood, Elizabeth R.; Germain-Aubrey, Charlotte C.; Gilbert, Ed; Gillespie, Emily; Goertzen, Leslie R.; Legler, Ben; Marchant, D. Blaine; Marsico, Travis D.; Morris, Ashley B.; Murrell, Zack; Nazaire, Mare; Neefus, Chris; Oberreiter, Shanna; Paul, Deborah; Ruhfel, Brad R.; Sasek, Thomas; Shaw, Joey; Soltis, Pamela S.; Watson, Kimberly; Weeks, Andrea; Mast, Austin R.

    2015-01-01

    Effective workflows are essential components in the digitization of biodiversity specimen collections. To date, no comprehensive, community-vetted workflows have been published for digitizing flat sheets and packets of plants, algae, and fungi, even though latest estimates suggest that only 33% of herbarium specimens have been digitally transcribed, 54% of herbaria use a specimen database, and 24% are imaging specimens. In 2012, iDigBio, the U.S. National Science Foundation’s (NSF) coordinating center and national resource for the digitization of public, nonfederal U.S. collections, launched several working groups to address this deficiency. Here, we report the development of 14 workflow modules with 7–36 tasks each. These workflows represent the combined work of approximately 35 curators, directors, and collections managers representing more than 30 herbaria, including 15 NSF-supported plant-related Thematic Collections Networks and collaboratives. The workflows are provided for download as Portable Document Format (PDF) and Microsoft Word files. Customization of these workflows for specific institutional implementation is encouraged. PMID:26421256

  4. Quantitation of 87 Proteins by nLC-MRM/MS in Human Plasma: Workflow for Large-Scale Analysis of Biobank Samples.

    PubMed

    Rezeli, Melinda; Sjödin, Karin; Lindberg, Henrik; Gidlöf, Olof; Lindahl, Bertil; Jernberg, Tomas; Spaak, Jonas; Erlinge, David; Marko-Varga, György

    2017-09-01

    A multiple reaction monitoring (MRM) assay was developed for precise quantitation of 87 plasma proteins, including the three isoforms of apolipoprotein E (APOE) associated with cardiovascular diseases, using nanoscale liquid chromatography separation and a stable isotope dilution strategy. The analytical performance of the assay was evaluated, and we found an average technical variation of 4.7% over a dynamic range of 4-5 orders of magnitude (≈0.2 mg/L to 4.5 g/L) from whole plasma digest. Here, we report a complete workflow, including sample processing adapted to a 96-well plate format and a normalization strategy for large-scale studies. To further investigate the MS-based quantitation, the amounts of six selected proteins were also measured by routinely used clinical chemistry assays, and the two methods showed excellent correlation with high significance (p-value < 10e-5) for the six proteins, as well as for the cardiovascular predictor, the APOB:APOA1 ratio (r = 0.969, p-value < 10e-5). Moreover, we utilized the developed assay for screening of biobank samples from patients with myocardial infarction and performed a comparative analysis of patient groups with STEMI (ST-segment elevation myocardial infarction), NSTEMI (non-ST-segment elevation myocardial infarction), and type-2 AMI (type-2 myocardial infarction).
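
    The two comparisons reported above (method correlation and the APOB:APOA1 ratio) amount to simple per-sample arithmetic, illustrated below with made-up values; this is not the study's code.

```python
# Correlate MS-based and clinical-chemistry values for one protein, and form
# the APOB:APOA1 ratio per sample (all numbers invented for illustration).
import numpy as np

ms_values       = np.array([1.10, 0.85, 1.40, 0.95, 1.20, 0.70])
clinical_values = np.array([1.05, 0.90, 1.35, 1.00, 1.15, 0.72])

r = np.corrcoef(ms_values, clinical_values)[0, 1]
print(f"Pearson r = {r:.3f}")

# Per-sample predictor: APOB divided by APOA1 (both quantified by MRM).
apob  = np.array([0.9, 1.2, 0.8])
apoa1 = np.array([1.5, 1.4, 1.6])
print("APOB:APOA1 ratio per sample:", apob / apoa1)
```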

  5. Development of a sequential workflow based on LC-PRM for the verification of endometrial cancer protein biomarkers in uterine aspirate samples.

    PubMed

    Martinez-Garcia, Elena; Lesur, Antoine; Devis, Laura; Campos, Alexandre; Cabrera, Silvia; van Oostrum, Jan; Matias-Guiu, Xavier; Gil-Moreno, Antonio; Reventos, Jaume; Colas, Eva; Domon, Bruno

    2016-08-16

    About 30% of endometrial cancer (EC) patients are diagnosed at an advanced stage of the disease, which is associated with a drastic decrease in the 5-year survival rate. The identification of biomarkers in uterine aspirate samples, which are collected by a minimally invasive procedure, would improve early diagnosis of EC. We present a sequential workflow to select from a list of potential EC biomarkers, those which are the most promising to enter a validation study. After the elimination of confounding contributions by residual blood proteins, 52 potential biomarkers were analyzed in uterine aspirates from 20 EC patients and 18 non-EC controls by a high-resolution accurate mass spectrometer operated in parallel reaction monitoring mode. The differential abundance of 26 biomarkers was observed, and among them ten proteins showed a high sensitivity and specificity (AUC > 0.9). The study demonstrates that uterine aspirates are valuable samples for EC protein biomarkers screening. It also illustrates the importance of a biomarker verification phase to fill the gap between discovery and validation studies and highlights the benefits of high resolution mass spectrometry for this purpose. The proteins verified in this study have an increased likelihood to become a clinical assay after a subsequent validation phase.
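
    The sensitivity/specificity summary used above is the area under the ROC curve per candidate biomarker. The sketch below computes it with scikit-learn on invented intensities for EC patients and controls; it is an illustration of the metric, not the study's analysis code.

```python
# ROC AUC for one candidate biomarker: label 1 = EC patient, 0 = control.
import numpy as np
from sklearn.metrics import roc_auc_score

labels      = np.array([1, 1, 1, 1, 0, 0, 0, 0])
intensities = np.array([9.1, 8.7, 7.9, 8.3, 5.2, 6.0, 5.8, 6.4])  # PRM peak areas (made up)

auc = roc_auc_score(labels, intensities)
print(f"AUC = {auc:.2f}")   # AUC > 0.9 was the reported threshold for the top proteins
```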

  6. Advances in metabolome information retrieval: turning chemistry into biology. Part I: analytical chemistry of the metabolome.

    PubMed

    Tebani, Abdellah; Afonso, Carlos; Bekri, Soumeya

    2018-05-01

    Metabolites are small molecules produced by enzymatic reactions in a given organism. Metabolomics or metabolic phenotyping is a well-established omics aimed at comprehensively assessing metabolites in biological systems. These comprehensive analyses use analytical platforms, mainly nuclear magnetic resonance spectroscopy and mass spectrometry, along with associated separation methods to gather qualitative and quantitative data. Metabolomics holistically evaluates biological systems in an unbiased, data-driven approach that may ultimately support generation of hypotheses. The approach inherently allows the molecular characterization of a biological sample with regard to both internal (genetics) and environmental (exosome, microbiome) influences. Metabolomics workflows are based on whether the investigator knows a priori what kind of metabolites to assess. Thus, a targeted metabolomics approach is defined as a quantitative analysis (absolute concentrations are determined) or a semiquantitative analysis (relative intensities are determined) of a set of metabolites that are possibly linked to common chemical classes or a selected metabolic pathway. An untargeted metabolomics approach is a semiquantitative analysis of the largest possible number of metabolites contained in a biological sample. This is part I of a review intending to give an overview of the state of the art of major metabolic phenotyping technologies. Furthermore, their inherent analytical advantages and limits regarding experimental design, sample handling, standardization and workflow challenges are discussed.

  7. Wearable technology as a booster of clinical care

    NASA Astrophysics Data System (ADS)

    Jonas, Stephan; Hannig, Andreas; Spreckelsen, Cord; Deserno, Thomas M.

    2014-03-01

    Wearable technology defines a new class of smart devices that are accessories or clothing equipped with computational power and sensors, like Google Glass. In this work, we propose a novel concept for supporting everyday clinical pathways with wearable technology. In contrast to most prior work, we are not focusing on the omnipresent screen to display patient information or images, but are trying to maintain existing workflows. To achieve this, our system supports clinical staff as a documenting observer, only intervening adequately if problems are detected. Using the example of medication preparation and administration, a task known to be prone to errors, we demonstrate the full potential of the new devices. Patient and medication identifier are captured with the built-in camera, and the information is send to a transaction server. The server communicates with the hospital information system to obtain patient records and medication information. The system then analyses the new medication for possible side-effects and interactions with already administered drugs. The result is sent to the device while encapsulating all sensitive information respecting data security and privacy. The user only sees a traffic light style encoded feedback to avoid distraction. The server can reduce documentation efforts and reports in real-time on possible problems during medication preparation or administration. In conclusion, we designed a secure system around three basic principles with many applications in everyday clinical work: (i) interaction and distraction is kept as low as possible; (ii) no patient data is displayed; and (iii) device is pure observer, not part of the workflow. By reducing errors and documentation burden, our approach has the capability to boost clinical care.
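
    The server-side decision described above can be sketched as follows: given the drugs already administered and a newly scanned drug, return only a traffic-light status so no patient data reaches the wearable display. The interaction table and drug names are hypothetical.

```python
# Return 'red', 'yellow', or 'green' for a newly scanned medication, based on a
# (hypothetical) pairwise interaction table.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "red",           # contraindicated combination
    frozenset({"simvastatin", "amlodipine"}): "yellow",  # dose caution
}

def traffic_light(new_drug, administered_drugs):
    worst = "green"
    for prior in administered_drugs:
        level = KNOWN_INTERACTIONS.get(frozenset({new_drug, prior}))
        if level == "red":
            return "red"
        if level == "yellow":
            worst = "yellow"
    return worst

print(traffic_light("aspirin", ["warfarin", "metformin"]))   # red
```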

  8. Economic and workflow analysis of a blood bank automated system.

    PubMed

    Shin, Kyung-Hwa; Kim, Hyung Hoi; Chang, Chulhun L; Lee, Eun Yup

    2013-07-01

    This study compared the estimated costs and times required for ABO/Rh(D) typing and unexpected antibody screening using an automated system and manual methods. The total cost included direct and labor costs. Labor costs were calculated on the basis of the average operator salaries and unit values (minutes), which was the hands-on time required to test one sample. To estimate unit values, workflows were recorded on video, and the time required for each process was analyzed separately. The unit values of ABO/Rh(D) typing using the manual method were 5.65 and 8.1 min during regular and unsocial working hours, respectively. The unit value was less than 3.5 min when several samples were tested simultaneously. The unit value for unexpected antibody screening was 2.6 min. The unit values using the automated method for ABO/Rh(D) typing, unexpected antibody screening, and both simultaneously were all 1.5 min. The total cost of ABO/Rh(D) typing of only one sample using the automated analyzer was lower than that of testing only one sample using the manual technique but higher than that of testing several samples simultaneously. The total cost of unexpected antibody screening using an automated analyzer was less than that using the manual method. ABO/Rh(D) typing using an automated analyzer incurs a lower unit value and cost than that using the manual technique when only one sample is tested at a time. Unexpected antibody screening using an automated analyzer always incurs a lower unit value and cost than that using the manual technique.
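
    The costing model above combines a direct cost per test with a labor cost equal to the operator salary per minute times the unit value (hands-on minutes per sample). The sketch below works through that arithmetic using the manual (5.65 min) and automated (1.5 min) unit values reported in the abstract; the salary and direct costs are made-up placeholders.

```python
# Cost per sample = direct cost + unit value (min) x salary per minute.
def cost_per_sample(unit_value_min, salary_per_min, direct_cost):
    return direct_cost + unit_value_min * salary_per_min

salary_per_min = 0.50          # hypothetical operator salary, currency units/min
manual_abo    = cost_per_sample(unit_value_min=5.65, salary_per_min=salary_per_min,
                                direct_cost=2.00)   # hypothetical reagent cost
automated_abo = cost_per_sample(unit_value_min=1.5, salary_per_min=salary_per_min,
                                direct_cost=3.00)   # hypothetical reagent cost
print(f"manual: {manual_abo:.2f}, automated: {automated_abo:.2f} per sample")
```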

  9. Disruption of Radiologist Workflow.

    PubMed

    Kansagra, Akash P; Liu, Kevin; Yu, John-Paul J

    2016-01-01

    The effect of disruptions has been studied extensively in surgery and emergency medicine, and a number of solutions-such as preoperative checklists-have been implemented to enforce the integrity of critical safety-related workflows. Disruptions of the highly complex and cognitively demanding workflow of modern clinical radiology have only recently attracted attention as a potential safety hazard. In this article, we describe the variety of disruptions that arise in the reading room environment, review approaches that other specialties have taken to mitigate workflow disruption, and suggest possible solutions for workflow improvement in radiology. Copyright © 2015 Mosby, Inc. All rights reserved.

  10. Workflow based framework for life science informatics.

    PubMed

    Tiwari, Abhishek; Sekhar, Arvind K T

    2007-10-01

    Workflow technology is a generic mechanism to integrate diverse types of available resources (databases, servers, software applications and different services) which facilitate knowledge exchange within traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own research protocols for scientific analysis. Application of workflow technology has been reported in areas like drug discovery, genomics, large-scale gene expression analysis, proteomics, and system biology. In this article, we have discussed the existing workflow systems and the trends in applications of workflow based systems.

  11. Omics Metadata Management Software (OMMS).

    PubMed

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.

  12. Matrix-assisted laser desorption ionization-time of flight mass spectrometry: a fundamental shift in the routine practice of clinical microbiology.

    PubMed

    Clark, Andrew E; Kaleta, Erin J; Arora, Amit; Wolk, Donna M

    2013-07-01

    Within the past decade, clinical microbiology laboratories experienced revolutionary changes in the way in which microorganisms are identified, moving away from slow, traditional microbial identification algorithms toward rapid molecular methods and mass spectrometry (MS). Historically, MS was clinically utilized as a high-complexity method adapted for protein-centered analysis of samples in chemistry and hematology laboratories. Today, matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) MS is adapted for use in microbiology laboratories, where it serves as a paradigm-shifting, rapid, and robust method for accurate microbial identification. Multiple instrument platforms, marketed by well-established manufacturers, are beginning to displace automated phenotypic identification instruments and in some cases genetic sequence-based identification practices. This review summarizes the current position of MALDI-TOF MS in clinical research and in diagnostic clinical microbiology laboratories and serves as a primer to examine the "nuts and bolts" of MALDI-TOF MS, highlighting research associated with sample preparation, spectral analysis, and accuracy. Currently available MALDI-TOF MS hardware and software platforms that support the use of MALDI-TOF with direct and precultured specimens and integration of the technology into the laboratory workflow are also discussed. Finally, this review closes with a prospective view of the future of MALDI-TOF MS in the clinical microbiology laboratory to accelerate diagnosis and microbial identification to improve patient care.

  13. Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry: a Fundamental Shift in the Routine Practice of Clinical Microbiology

    PubMed Central

    Clark, Andrew E.; Kaleta, Erin J.; Arora, Amit

    2013-01-01

    SUMMARY Within the past decade, clinical microbiology laboratories experienced revolutionary changes in the way in which microorganisms are identified, moving away from slow, traditional microbial identification algorithms toward rapid molecular methods and mass spectrometry (MS). Historically, MS was clinically utilized as a high-complexity method adapted for protein-centered analysis of samples in chemistry and hematology laboratories. Today, matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) MS is adapted for use in microbiology laboratories, where it serves as a paradigm-shifting, rapid, and robust method for accurate microbial identification. Multiple instrument platforms, marketed by well-established manufacturers, are beginning to displace automated phenotypic identification instruments and in some cases genetic sequence-based identification practices. This review summarizes the current position of MALDI-TOF MS in clinical research and in diagnostic clinical microbiology laboratories and serves as a primer to examine the “nuts and bolts” of MALDI-TOF MS, highlighting research associated with sample preparation, spectral analysis, and accuracy. Currently available MALDI-TOF MS hardware and software platforms that support the use of MALDI-TOF with direct and precultured specimens and integration of the technology into the laboratory workflow are also discussed. Finally, this review closes with a prospective view of the future of MALDI-TOF MS in the clinical microbiology laboratory to accelerate diagnosis and microbial identification to improve patient care. PMID:23824373

  14. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  15. [Pre-analytical stage for biomarker assessment in breast cancer: 2014 update of the GEFPICS' guidelines in France].

    PubMed

    MacGrogan, Gaëtan; Mathieu, Marie-Christine; Poulet, Bruno; Penault-Llorca, Frédérique; Vincent-Salomon, Anne; Roger, Pascal; Treilleux, Isabelle; Valent, Alexander; Antoine, Martine; Becette, Véronique; Bor, Catherine; Brabencova, Eva; Charafe-Jauffret, Emmanuelle; Chenard, Marie-Pierre; Dauplat, Marie-Mélanie; Delrée, Paul; Devouassoux, Mojgan; Fiche, Maryse; Fondrevelle, Marie-Eve; Fridman, Viviana; Garbar, Christian; Genin, Pascal; Ghnassia, Jean-Pierre; Haudebourg, Juliette; Laberge-Le Couteulx, Sophie; Loussouarn, Delphine; Maran-Gonzalez, Aurélie; Marcy, Myriam; Michenet, Patrick; Sagan, Christine; Trassard, Martine; Verriele, Véronique; Arnould, Laurent; Lacroix-Triki, Magali

    2014-10-01

    Biomarker assessment of breast cancer tumor samples is part of the routine workflow of pathology laboratories. International guidelines have recently been updated, with special regard to the pre-analytical steps that are critical for the quality of immunohistochemical and in situ hybridization procedures, whatever the biomarker analyzed. Fixation and specimen handling protocols must be standardized, validated and carefully tracked. Cooperation and training of the personnel involved in the specimen workflow (e.g. radiologists, surgeons, nurses, technicians and pathologists) are of paramount importance. The GEFPICS' update of the recommendations herein details and comments on the different steps of the pre-analytical process. Application of these guidelines and participation in quality assurance programs are mandatory to ensure the correct evaluation of oncotheranostic biomarkers. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  16. Workflows for microarray data processing in the Kepler environment.

    PubMed

    Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark

    2012-05-17

    Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.
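
    The GFF handling at the core of the ChIP-chip pipelines above boils down to reading tab-separated feature records and filtering them. The sketch below shows that step in plain Python rather than as a Kepler actor; the file path and score threshold are hypothetical, and the column layout follows the GFF specification.

```python
# Read a GFF file and keep features whose score exceeds a threshold.
def filter_gff(path, min_score):
    hits = []
    with open(path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue                      # skip comments and blank lines
            cols = line.rstrip("\n").split("\t")
            if len(cols) < 6:
                continue
            seqid, ftype = cols[0], cols[2]
            start, end, score = int(cols[3]), int(cols[4]), cols[5]
            if score != "." and float(score) >= min_score:
                hits.append((seqid, ftype, start, end, float(score)))
    return hits

# Example (hypothetical file): peaks = filter_gff("chip_chip_peaks.gff", min_score=2.0)
```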

  17. Dynamic reusable workflows for ocean science

    USGS Publications Warehouse

    Signell, Richard; Fernandez, Filipe; Wilcox, Kyle

    2016-01-01

    Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog search and data access make it now possible to create catalog-driven workflows that automate — end-to-end — data search, analysis and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill-assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enters the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased use of dynamic notebooks across the geoscience domains.
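
    The skill-assessment step in the workflow above reduces to comparing a model time series against co-located observations. The sketch below computes two common skill metrics (bias and RMSE) on invented values; in the notebooks these arrays would come from the SOS and OPeNDAP services.

```python
# Simple forecast skill metrics: bias and root-mean-square error.
import numpy as np

observed = np.array([14.2, 14.5, 15.1, 15.8, 16.0])   # in-situ sensor (deg C)
modelled = np.array([13.9, 14.8, 15.4, 16.2, 16.1])   # forecast model (deg C)

bias = np.mean(modelled - observed)
rmse = np.sqrt(np.mean((modelled - observed) ** 2))
print(f"bias = {bias:+.2f} C, RMSE = {rmse:.2f} C")
```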

  18. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.

  19. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  20. Health information exchange technology on the front lines of healthcare: workflow factors and patterns of use

    PubMed Central

    Johnson, Kevin B; Lorenzi, Nancy M

    2011-01-01

    Objective The goal of this study was to develop an in-depth understanding of how a health information exchange (HIE) fits into clinical workflow at multiple clinical sites. Materials and Methods The ethnographic qualitative study was conducted over a 9-month period in six emergency departments (ED) and eight ambulatory clinics in Memphis, Tennessee, USA. Data were collected using direct observation, informal interviews during observation, and formal semi-structured interviews. The authors observed for over 180 h, during which providers used the exchange 130 times. Results HIE-related workflow was modeled for each ED site and ambulatory clinic group and substantial site-to-site workflow differences were identified. Common patterns in HIE-related workflow were also identified across all sites, leading to the development of two role-based workflow models: nurse based and physician based. The workflow elements framework was applied to the two role-based patterns. An in-depth description was developed of how providers integrated HIE into existing clinical workflow, including prompts for HIE use. Discussion Workflow differed substantially among sites, but two general role-based HIE usage models were identified. Although providers used HIE to improve continuity of patient care, patient–provider trust played a significant role. Types of information retrieved related to roles, with nurses seeking to retrieve recent hospitalization data and more open-ended usage by nurse practitioners and physicians. User and role-specific customization to accommodate differences in workflow and information needs may increase the adoption and use of HIE. Conclusion Understanding end users' perspectives towards HIE technology is crucial to the long-term success of HIE. By applying qualitative methods, an in-depth understanding of HIE usage was developed. PMID:22003156

  1. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator.

    PubMed

    Garcia Castro, Alexander; Thoraval, Samuel; Garcia, Leyla J; Ragan, Mark A

    2005-04-07

    Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous analytical tools.

  2. Provenance-Powered Automatic Workflow Generation and Composition

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.

    2015-12-01

    In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is often at odds with a domain scientist's daily routine of conducting research and exploration. We hope to resolve this tension. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology that automatically generates workflows from this recorded user behavior and supports the adaptation and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance and further facilitate collaboration in the science community. We have also established a Petri nets-based verification instrument for provenance-based automatic workflow generation and recommendation.

  3. Support for Taverna workflows in the VPH-Share cloud platform.

    PubMed

    Kasztelnik, Marek; Coto, Ernesto; Bubak, Marian; Malawski, Maciej; Nowakowski, Piotr; Arenas, Juan; Saglimbeni, Alfredo; Testi, Debora; Frangi, Alejandro F

    2017-07-01

    To address the increasing need for collaborative endeavours within the Virtual Physiological Human (VPH) community, the VPH-Share collaborative cloud platform allows researchers to expose and share sequences of complex biomedical processing tasks in the form of computational workflows. The Taverna Workflow System is a very popular tool for orchestrating complex biomedical & bioinformatics processing tasks in the VPH community. This paper describes the VPH-Share components that support the building and execution of Taverna workflows, and explains how they interact with other VPH-Share components to improve the capabilities of the VPH-Share platform. Taverna workflow support is delivered by the Atmosphere cloud management platform and the VPH-Share Taverna plugin. These components are explained in detail, along with the two main procedures that were developed to enable this seamless integration: workflow composition and execution. 1) Seamless integration of VPH-Share with other components and systems. 2) Extended range of different tools for workflows. 3) Successful integration of scientific workflows from other VPH projects. 4) Execution speed improvement for medical applications. The presented workflow integration provides VPH-Share users with a wide range of different possibilities to compose and execute workflows, such as desktop or online composition, online batch execution, multithreading, remote execution, etc. The specific advantages of each supported tool are presented, as are the roles of Atmosphere and the VPH-Share plugin within the VPH-Share project. The combination of the VPH-Share plugin and Atmosphere endows the VPH-Share infrastructure with far more flexible, powerful and usable capabilities for the VPH-Share community. As both components can continue to evolve and improve independently, we acknowledge that further improvements are still to be developed and will be described. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A cyber-enabled spatial decision support system to inventory Mangroves in Mozambique: coupling scientific workflows and cloud computing

    Treesearch

    Wenwu Tang; Wenpeng Feng; Meijuan Jia; Jiyang Shi; Huifang Zuo; Christina E. Stringer; Carl C. Trettin

    2017-01-01

    Mangroves are an important terrestrial carbon reservoir with numerous ecosystem services. Yet, it is difficult to inventory mangroves because of their low accessibility. A sampling approach that produces accurate assessment while maximizing logistical integrity of inventory operation is often required. Spatial decision support systems (SDSSs) provide support for...

  5. Identifying impact of software dependencies on replicability of biomedical workflows.

    PubMed

    Miksa, Tomasz; Rauber, Andreas; Mina, Eleni

    2016-12-01

    Complex data-driven experiments form the basis of biomedical research. Recent findings warn that the context in which the software is run, that is, the infrastructure and the third-party dependencies, can have a crucial impact on the final results delivered by a computational experiment. This implies that in order to replicate the same result, not only must the same data be used, but the experiment must also be run on an equivalent software stack. In this paper we present the VFramework, which enables assessing the replicability of workflows. It identifies whether any differences in software dependencies exist between two executions of the same workflow and whether they have an impact on the produced results. We also conduct a case study in which we investigate the impact of software dependencies on the replicability of Taverna workflows used in biomedical research on Huntington's disease. We re-execute the analysed workflows in environments differing in operating system distribution and configuration. The results show that the VFramework can be used to identify the impact of software dependencies on the replicability of biomedical workflows. Furthermore, we observe that despite the fact that the workflows are executed in a controlled environment, they still depend on specific tools installed in the environment. The context model used by the VFramework addresses the deficiencies of provenance traces and also documents such tools. Based on our findings we define guidelines for workflow owners that enable them to improve the replicability of their workflows. Copyright © 2016 Elsevier Inc. All rights reserved.
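
    The core idea of comparing software dependencies between two executions can be sketched with a simple diff of captured package lists. This is an illustration of the idea only, not the VFramework's implementation; the environment listings below are made up.

```python
# Diff the package sets captured from two executions of the same workflow and
# report anything that could explain divergent results.
def parse_freeze(text):
    """Parse 'package==version' lines (e.g. from `pip freeze`) into a dict."""
    deps = {}
    for line in text.splitlines():
        if "==" in line:
            name, version = line.strip().split("==", 1)
            deps[name.lower()] = version
    return deps

def compare_environments(freeze_a, freeze_b):
    a, b = parse_freeze(freeze_a), parse_freeze(freeze_b)
    missing = sorted(set(a) ^ set(b))                      # present in only one env
    changed = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    return {"only_in_one_environment": missing,
            "version_mismatch": {p: (a[p], b[p]) for p in changed}}

env1 = "numpy==1.24.0\nscipy==1.10.1\n"
env2 = "numpy==1.26.0\npandas==2.1.0\n"
print(compare_environments(env1, env2))
```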

  6. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-11-28

    At present, coding sequences (CDS) continue to be discovered, and larger CDS are being revealed frequently. Approaches and related tools have also been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public-access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in FASTA format. The workflow supports bootstrap replication numbers from 1,000 to 20,000. The workflow performs tree inference using the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2 deployment; both the SOAP and Java Web Service (JWS) deployments provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. Our workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. This paper proposes a new integrated automatic workflow which will be beneficial to bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.
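
    One simple way to compute a similarity score over an aligned FASTA file, in the spirit of the MSA similarity score the workflow uses, is the mean pairwise identity over aligned columns. The scoring rule and the input file name below are assumptions for illustration, not the paper's exact definition.

```python
# Mean pairwise identity of an aligned FASTA file (gap positions skipped).
from itertools import combinations

def read_fasta(path):
    seqs, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name:
                seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

def mean_pairwise_identity(aligned):
    scores = []
    for a, b in combinations(aligned.values(), 2):
        pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
        if pairs:
            scores.append(sum(x == y for x, y in pairs) / len(pairs))
    return sum(scores) / len(scores) if scores else 0.0

# Example (hypothetical file): print(mean_pairwise_identity(read_fasta("aligned_cds.fasta")))
```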

  7. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-03-01

    At the present, coding sequence (CDS) has been discovered and larger CDS is being revealed frequently. Approaches and related tools have also been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for the phylogenetic tree inferring analysis using public access web services at European Bioinformatics Institute (EMBL-EBI) and Swiss Institute of Bioinformatics (SIB), and our own deployed local web services. The workflow input is a set of CDS in the Fasta format. The workflow supports 1,000 to 20,000 numbers in bootstrapping replication. The workflow performs the tree inferring such as Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of EMBOSS PHYLIPNEW package based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed into two types using the Soaplab2 and Apache Axis2 deployment. There are SOAP and Java Web Service (JWS) providing WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, the performance has been measured, and its results have been verified. Our workflow's execution time is less than ten minutes for inferring a tree with 10,000 replicates of the bootstrapping numbers. This paper proposes a new integrated automatic workflow which will be beneficial to the bioinformaticians with an intermediate level of knowledge and experiences. The all local services have been deployed at our portal http://bioservices.sci.psu.ac.th.

  8. Digital Curation of Earth Science Samples Starts in the Field

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Hsu, L.; Song, L.; Carter, M. R.

    2014-12-01

    Collection of physical samples in the field is an essential part of research in the Earth Sciences. Samples provide a basis for progress across many disciplines, from the study of global climate change now and over the Earth's history, to present and past biogeochemical cycles, to magmatic processes and mantle dynamics. The types of samples, methods of collection, and scope and scale of sampling campaigns are highly diverse, ranging from large-scale programs to drill rock and sediment cores on land, in lakes, and in the ocean, to environmental observation networks with continuous sampling, to single investigator or small team expeditions to remote areas around the globe or trips to local outcrops. Cyberinfrastructure for sample-related fieldwork needs to cater to the different needs of these diverse sampling activities, aligning with specific workflows, regional constraints such as connectivity or climate, and processing of samples. In general, digital tools should assist with capture and management of metadata about the sampling process (location, time, method) and the sample itself (type, dimension, context, images, etc.), management of the physical objects (e.g., sample labels with QR codes), and the seamless transfer of sample metadata to data systems and software relevant to the post-sampling data acquisition, data processing, and sample curation. In order to optimize CI capabilities for samples, tools and workflows need to adopt community-based standards and best practices for sample metadata, classification, identification and registration. This presentation will provide an overview and updates of several ongoing efforts that are relevant to the development of standards for digital sample management: the ODM2 project that has generated an information model for spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples, aligned with OGC's Observation & Measurements model (Horsburgh et al, AGU FM 2014); implementation of the IGSN (International Geo Sample Number) as a globally unique sample identifier via a distributed system of allocating agents and a central registry; and the EarthCube Research Coordination Network iSamplES (Internet of Samples in the Earth Sciences) that aims to improve sharing and curation of samples through the use of CI.
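
    As a simple illustration of capturing sample metadata digitally at the point of collection, the sketch below serializes a minimal field record to JSON for later hand-off to a registry or data system. The field names are illustrative assumptions and do not follow the IGSN or ODM2 schemas.

```python
# Minimal sketch of capturing field-sample metadata (location, time, method,
# sample type) and writing it out for transfer to a sample registry.
import json
import uuid
from datetime import datetime, timezone

def new_field_sample(lat, lon, sample_type, method, collector):
    return {
        "local_id": str(uuid.uuid4()),   # placeholder until a persistent identifier is assigned
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "latitude": lat,
        "longitude": lon,
        "sample_type": sample_type,
        "collection_method": method,
        "collector": collector,
    }

sample = new_field_sample(40.0150, -105.2705, "rock core", "hand drill", "field team A")
with open(f"{sample['local_id']}.json", "w") as fh:
    json.dump(sample, fh, indent=2)   # ready for upload once connectivity is available
```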

  9. Impact of Robotic Antineoplastic Preparation on Safety, Workflow, and Costs

    PubMed Central

    Seger, Andrew C.; Churchill, William W.; Keohane, Carol A.; Belisle, Caryn D.; Wong, Stephanie T.; Sylvester, Katelyn W.; Chesnick, Megan A.; Burdick, Elisabeth; Wien, Matt F.; Cotugno, Michael C.; Bates, David W.; Rothschild, Jeffrey M.

    2012-01-01

    Purpose: Antineoplastic preparation presents unique safety concerns and consumes significant pharmacy staff time and costs. Robotic antineoplastic and adjuvant medication compounding may provide incremental safety and efficiency advantages compared with standard pharmacy practices. Methods: We conducted a direct observation trial in an academic medical center pharmacy to compare the effects of usual/manual antineoplastic and adjuvant drug preparation (baseline period) with robotic preparation (intervention period). The primary outcomes were serious medication errors and staff safety events with the potential for harm of patients and staff, respectively. Secondary outcomes included medication accuracy determined by gravimetric techniques, medication preparation time, and the costs of both ancillary materials used during drug preparation and personnel time. Results: Among 1,421 and 972 observed medication preparations, we found nine (0.7%) and seven (0.7%) serious medication errors (P = .8) and 73 (5.1%) and 28 (2.9%) staff safety events (P = .007) in the baseline and intervention periods, respectively. Drugs failed accuracy measurements in 12.5% (23 of 184) and 0.9% (one of 110) of preparations in the baseline and intervention periods, respectively (P < .001). Mean drug preparation time increased by 47% when using the robot (P = .009). Labor costs were similar in both study periods, although the ancillary material costs decreased by 56% in the intervention period (P < .001). Conclusion: Although robotically prepared antineoplastic and adjuvant medications did not reduce serious medication errors, both staff safety and accuracy of medication preparation were improved significantly. Future studies are necessary to address the overall cost effectiveness of these robotic implementations. PMID:23598843

  10. A Model of Workflow Composition for Emergency Management

    NASA Astrophysics Data System (ADS)

    Xin, Chen; Bin-ge, Cui; Feng, Zhang; Xue-hui, Xu; Shan-shan, Fu

    Commonly used workflow technology is not flexible enough to deal with concurrent emergency situations. This paper proposes a novel model for defining emergency plans, in which workflow segments appear as a constituent part. A formal abstraction containing four operations is defined to compose workflow segments under constraint rules. A software system for constructing and composing business process resources has been implemented and integrated into the Emergency Plan Management Application System.
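
    The four composition operations are not named in the abstract; the sketch below therefore assumes two hypothetical ones (sequence and parallel) simply to illustrate how workflow segments might be composed into an emergency plan.

```python
# Hypothetical sketch of composing workflow segments into an emergency plan.
# "sequence" and "parallel" are assumed operations for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    name: str
    tasks: List[str]

def sequence(a: Segment, b: Segment) -> Segment:
    return Segment(f"({a.name};{b.name})", a.tasks + b.tasks)

def parallel(a: Segment, b: Segment) -> Segment:
    return Segment(f"({a.name}||{b.name})", a.tasks + b.tasks)

evacuate = Segment("evacuate", ["alert", "assemble", "transport"])
medical = Segment("medical", ["triage", "treat"])
report = Segment("report", ["file_report"])
plan = parallel(evacuate, sequence(medical, report))
print(plan.name, plan.tasks)
```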

  11. A Framework for Modeling Workflow Execution by an Interdisciplinary Healthcare Team.

    PubMed

    Kezadri-Hamiaz, Mounira; Rosu, Daniela; Wilk, Szymon; Kuziemsky, Craig; Michalowski, Wojtek; Carrier, Marc

    2015-01-01

    The use of business workflow models in healthcare is limited because of insufficient capture of complexities associated with behavior of interdisciplinary healthcare teams that execute healthcare workflows. In this paper we present a novel framework that builds on the well-founded business workflow model formalism and related infrastructures and introduces a formal semantic layer that describes selected aspects of team dynamics and supports their real-time operationalization.

  12. Practical issues in implementing whole-genome-sequencing in routine diagnostic microbiology.

    PubMed

    Rossen, J W A; Friedrich, A W; Moran-Gilad, J

    2018-04-01

    Next generation sequencing (NGS) is increasingly being used in clinical microbiology. Like every new technology adopted in microbiology, the integration of NGS into clinical and routine workflows must be carefully managed. We review the practical aspects of implementing bacterial whole genome sequencing (WGS) in routine diagnostic laboratories, drawing on the literature and expert opinion. In this review, we discuss when and how to integrate WGS in the routine workflow of the clinical laboratory. In addition, as microbiology laboratories have to adhere to various national and international regulations and criteria for their accreditation, we deliberate on quality control issues for using WGS in microbiology, including the importance of proficiency testing. Furthermore, the current and future place of this technology in the diagnostic hierarchy of microbiology is described, as well as the necessity of maintaining backwards compatibility with already established methods. Finally, we speculate on whether WGS can entirely replace routine microbiology in the future, and on the tension between sequencers that are designed to process multiple samples in parallel and the one-by-one processing of samples that is preferred for optimal diagnosis. Special reference is made to the cost and turnaround time of WGS in diagnostic laboratories. Further development is required to improve the workflow for WGS, in particular to shorten the turnaround time, reduce costs, and streamline downstream data analyses. Only when these processes reach maturity will reliance on WGS for routine patient management and infection control management become feasible, enabling the transformation of clinical microbiology into a genome-based and personalized diagnostic field. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Design and implementation of a secure workflow system based on PKI/PMI

    NASA Astrophysics Data System (ADS)

    Yan, Kai; Jiang, Chao-hui

    2013-03-01

    Traditional workflow systems have several weaknesses in privilege management: low management efficiency, an overburdened administrator, and the lack of a trusted authority. After an in-depth study of the security requirements of workflow systems, a secure workflow model based on PKI/PMI is proposed. This model achieves static and dynamic authorization by verifying the user's identity with a public key certificate (PKC) and validating the user's privilege information with an attribute certificate (AC) in the workflow system. Practice shows that this system can meet the security requirements of a WfMS. Moreover, it not only improves system security but also ensures the integrity, confidentiality, availability and non-repudiation of the data in the system.
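
    A runnable, deliberately simplified sketch of the authorization decision described above follows; the dictionary-based "certificates" stand in for real PKC/AC parsing and signature validation, which this example does not perform.

```python
# Simplified illustration: identity comes from a public key certificate (PKC),
# privileges from an attribute certificate (AC). No real cryptographic
# verification is done here; the dicts are toy stand-ins.
from datetime import date

def verify_pkc(pkc: dict, trusted_issuers: set) -> str | None:
    """Return the subject name if the (simplified) certificate is acceptable."""
    if pkc["issuer"] in trusted_issuers and pkc["not_after"] >= date.today():
        return pkc["subject"]
    return None

def privileges_from_ac(ac: dict, holder: str) -> set:
    """Return the roles the attribute certificate grants to this holder."""
    if ac["holder"] == holder and ac["not_after"] >= date.today():
        return set(ac["roles"])
    return set()

def authorize(required_role: str, pkc: dict, ac: dict, trusted_issuers: set) -> bool:
    subject = verify_pkc(pkc, trusted_issuers)
    return subject is not None and required_role in privileges_from_ac(ac, subject)

pkc = {"subject": "alice", "issuer": "CorpCA", "not_after": date(2099, 1, 1)}
ac = {"holder": "alice", "roles": ["approver"], "not_after": date(2099, 1, 1)}
print(authorize("approver", pkc, ac, {"CorpCA"}))   # True
```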

  14. Process Mining for Individualized Behavior Modeling Using Wireless Tracking in Nursing Homes

    PubMed Central

    Fernández-Llatas, Carlos; Benedi, José-Miguel; García-Gómez, Juan M.; Traver, Vicente

    2013-01-01

    The analysis of human behavior patterns is increasingly used in several research fields. The individualized modeling of behavior using classical techniques requires too much time and resources to be effective. A possible solution would be the use of pattern recognition techniques to automatically infer models that allow experts to understand individual behavior. However, traditional pattern recognition algorithms infer models that are not readily understood by human experts. This limits the capacity to benefit from the inferred models. Process mining technologies can infer models as workflows, specifically designed to be understood by experts, enabling them to detect specific behavior patterns in users. In this paper, the eMotiva process mining algorithms are presented. These algorithms filter, infer and visualize workflows. The workflows are inferred from the samples produced by an indoor location system that stores the location of a resident in a nursing home. The visualization tool is able to compare and highlight behavior patterns in order to facilitate expert understanding of human behavior. This tool was tested with nine real users who were monitored for a 25-week period. The results achieved suggest that the behavior of users is continuously evolving and changing and that this change can be measured, allowing for behavioral change detection. PMID:24225907
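
    To illustrate the general idea of inferring behavior models from indoor location samples (not the eMotiva algorithms themselves), the sketch below counts transitions between consecutive zones for a resident; the event data are invented.

```python
# Count zone-to-zone transitions per resident from location events, a crude
# precursor to process discovery. Requires Python 3.10+ for itertools.pairwise.
from collections import Counter
from itertools import pairwise

events = [  # (resident, timestamp, zone), assumed sorted by time per resident
    ("r1", "07:55", "bedroom"), ("r1", "08:10", "dining"), ("r1", "09:00", "lounge"),
    ("r1", "13:00", "dining"), ("r1", "14:00", "lounge"), ("r1", "20:30", "bedroom"),
]

transitions = Counter(
    (a[2], b[2]) for a, b in pairwise(events) if a[0] == b[0]
)
for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")
```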

  15. New hardware and workflows for semi-automated correlative cryo-fluorescence and cryo-electron microscopy/tomography.

    PubMed

    Schorb, Martin; Gaechter, Leander; Avinoam, Ori; Sieckmann, Frank; Clarke, Mairi; Bebeacua, Cecilia; Bykov, Yury S; Sonnen, Andreas F-P; Lihl, Reinhard; Briggs, John A G

    2017-02-01

    Correlative light and electron microscopy allows features of interest defined by fluorescence signals to be located in an electron micrograph of the same sample. Rare dynamic events or specific objects can be identified, targeted and imaged by electron microscopy or tomography. To combine it with structural studies using cryo-electron microscopy or tomography, fluorescence microscopy must be performed while maintaining the specimen vitrified at liquid-nitrogen temperatures and in a dry environment during imaging and transfer. Here we present instrumentation, software and an experimental workflow that improves the ease of use, throughput and performance of correlated cryo-fluorescence and cryo-electron microscopy. The new cryo-stage incorporates a specially modified high-numerical aperture objective lens and provides a stable and clean imaging environment. It is combined with a transfer shuttle for contamination-free loading of the specimen. Optimized microscope control software allows automated acquisition of the entire specimen area by cryo-fluorescence microscopy. The software also facilitates direct transfer of the fluorescence image and associated coordinates to the cryo-electron microscope for subsequent fluorescence-guided automated imaging. Here we describe these technological developments and present a detailed workflow, which we applied for automated cryo-electron microscopy and tomography of various specimens. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Introducing students to digital geological mapping: A workflow based on cheap hardware and free software

    NASA Astrophysics Data System (ADS)

    Vrabec, Marko; Dolžan, Erazem

    2016-04-01

    The undergraduate field course in Geological Mapping at the University of Ljubljana involves 20-40 students per year, which precludes the use of specialized rugged digital field equipment as the costs would be far beyond the means of the Department. A different mapping area is selected each year with the aim of providing typical conditions that a professional geologist might encounter when doing fieldwork in Slovenia, which includes rugged relief, dense tree cover, and moderately-well- to poorly-exposed bedrock due to vegetation and urbanization. It is therefore mandatory that digital tools and workflows are combined with classical methods of fieldwork, since, for example, full-time precise GNSS positioning is not viable under such circumstances. Additionally, due to the prevailing combination of complex geological structure with generally poor exposure, students cannot be expected to produce line (vector) maps of geological contacts on the go, so there is no need for such functionality in the hardware and software that we use in the field. Our workflow therefore still relies on paper base maps, but is strongly complemented with digital tools to provide robust positioning, track recording, and acquisition of various point-based data. The primary field hardware is students' Android-based smartphones and, optionally, tablets. For our purposes, the built-in GNSS chips provide adequate positioning precision most of the time, particularly if they are GLONASS-capable. We use Oruxmaps, a powerful free offline map viewer for the Android platform, which facilitates the use of custom-made geopositioned maps. For digital base maps, which we prepare in the free QGIS software on Windows, we use scanned topographic maps provided by the National Geodetic Authority, but also other maps such as aerial imagery, processed Digital Elevation Models, scans of existing geological maps, etc. Point data, like important outcrop locations or structural measurements, are entered into Oruxmaps as waypoints. Students are also encouraged to directly measure structural data with specialized Android apps such as the MVE FieldMove Clino. Digital field data is exported from Oruxmaps to Windows computers primarily in the ubiquitous GPX data format and then integrated in the QGIS environment. Recorded GPX tracks are also used with the free Geosetter Windows software to geoposition and tag any digital photographs taken in the field. With minimal expenses, our workflow provides the students with basic familiarity and experience in using digital field tools and methods. The workflow is also practical enough for the prevailing field conditions of Slovenia that faculty staff use it in geological mapping for scientific research and consultancy work.
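
    As an example of the GPX hand-off step described above, the sketch below extracts waypoints from an exported GPX 1.1 file using only the Python standard library; the file name is a placeholder, and real exports may carry extra namespaces or extensions.

```python
# Minimal sketch of pulling waypoints (e.g., outcrop locations) out of a GPX
# file exported from a mobile mapping app, ready for CSV export or GIS import.
import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def read_waypoints(path):
    root = ET.parse(path).getroot()
    for wpt in root.findall("gpx:wpt", NS):
        name = wpt.findtext("gpx:name", default="", namespaces=NS)
        yield name, float(wpt.get("lat")), float(wpt.get("lon"))

for name, lat, lon in read_waypoints("field_day_01.gpx"):   # placeholder file name
    print(f"{name}: {lat:.5f}, {lon:.5f}")
```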

  17. Quality Metadata Management for Geospatial Scientific Workflows: from Retrieving to Assessing with Online Tools

    NASA Astrophysics Data System (ADS)

    Leibovici, D. G.; Pourabdollah, A.; Jackson, M.

    2011-12-01

    Experts and decision-makers use or develop models to monitor global and local changes of the environment. Their activities require the combination of data and processing services in a flow of operations and spatial data computations: a geospatial scientific workflow. The seamless ability to generate, re-use and modify a geospatial scientific workflow is an important requirement, but the quality of the outcomes is equally important [1]. Metadata attached to the data and processes, and particularly metadata about their quality, is essential to assess the reliability of the scientific model that a workflow represents [2]. Management tools that deal with qualitative and quantitative metadata measures of the quality associated with a workflow are therefore required by modellers. To ensure interoperability, ISO and OGC standards [3] are to be adopted, allowing one, for example, to define metadata profiles and to retrieve them via web service interfaces. However, these standards need a few extensions for workflows, particularly in the context of geoprocess metadata. We propose to fill this gap (i) through the provision of a metadata profile for the quality of processes, and (ii) through a framework, based on XPDL [4], to manage the quality information. Web Processing Services are used to implement a range of metadata analyses on the workflow in order to evaluate and present quality information at different levels of the workflow. This generates the quality metadata, which is stored in the XPDL file. The focus is (a) on visual representations of the quality, summarizing the quality information retrieved either from the standardized metadata profiles of the components or from non-standard quality information (e.g., Web 2.0 information), and (b) on the estimated qualities of the outputs derived from meta-propagation of uncertainties (a principle that we have introduced [5]). An a priori validation of the future decision-making supported by the workflow outputs is then provided using the meta-propagated qualities, obtained without running the workflow [6], together with a visualization on the workflow graph itself that points out where the workflow needs better data or better processes. [1] Leibovici, DG, Hobona, G, Stock, K, Jackson, M (2009) Qualifying geospatial workflow models for adaptive controlled validity and accuracy. In: IEEE 17th GeoInformatics, 1-5 [2] Leibovici, DG, Pourabdollah, A (2010a) Workflow Uncertainty using a Metamodel Framework and Metadata for Data and Processes. OGC TC/PC Meetings, September 2010, Toulouse, France [3] OGC (2011) www.opengeospatial.org [4] XPDL (2008) Workflow Process Definition Interface - XML Process Definition Language. Workflow Management Coalition, Document WfMC-TC-1025, 2008 [5] Leibovici, DG, Pourabdollah, A, Jackson, M (2011) Meta-propagation of Uncertainties for Scientific Workflow Management in Interoperable Spatial Data Infrastructures. In: Proceedings of the European Geosciences Union (EGU2011), April 2011, Austria [6] Pourabdollah, A, Leibovici, DG, Jackson, M (2011) MetaPunT: an Open Source tool for Meta-Propagation of uncerTainties in Geospatial Processing. In: Proceedings of OSGIS2011, June 2011, Nottingham, UK
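
    The meta-propagation principle of [5] and [6] is not specified in this abstract; the toy sketch below only illustrates the general idea of propagating per-component quality scores through a workflow graph, using an invented propagation rule (each node's output quality is its own process quality times the minimum quality of its inputs).

```python
# Toy propagation of quality scores through a workflow graph. The propagation
# rule here is invented for illustration and is not the published method.

def propagate(graph, process_quality, source_quality):
    """graph: node -> list of upstream nodes (empty list for data sources)."""
    quality = dict(source_quality)
    def q(node):
        if node not in quality:
            inputs = graph[node]
            upstream = min(q(n) for n in inputs) if inputs else 1.0
            quality[node] = process_quality[node] * upstream
        return quality[node]
    for node in graph:
        q(node)
    return quality

graph = {"dem": [], "rainfall": [], "runoff_model": ["dem", "rainfall"], "flood_map": ["runoff_model"]}
process_quality = {"dem": 1.0, "rainfall": 1.0, "runoff_model": 0.9, "flood_map": 0.95}
source_quality = {"dem": 0.8, "rainfall": 0.6}
print(propagate(graph, process_quality, source_quality))
```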

  18. P185-M Protein Identification and Validation of Results in Workflows that Integrate over Various Instruments, Datasets, Search Engines

    PubMed Central

    Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.

    2007-01-01

    Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results on a greater number of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data to one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.

  19. A streamlined workflow for single-cells genome-wide copy-number profiling by low-pass sequencing of LM-PCR whole-genome amplification products.

    PubMed

    Ferrarini, Alberto; Forcato, Claudio; Buson, Genny; Tononi, Paola; Del Monaco, Valentina; Terracciano, Mario; Bolognesi, Chiara; Fontana, Francesca; Medoro, Gianni; Neves, Rui; Möhlendick, Birte; Rihawi, Karim; Ardizzoni, Andrea; Sumanasuriya, Semini; Flohr, Penny; Lambros, Maryou; de Bono, Johann; Stoecklein, Nikolas H; Manaresi, Nicolò

    2018-01-01

    Chromosomal instability and associated chromosomal aberrations are hallmarks of cancer and play a critical role in disease progression and development of resistance to drugs. Single-cell genome analysis has gained interest in recent years as a source of biomarkers for targeted-therapy selection and drug resistance, and several methods have been developed to amplify the genomic DNA and to produce libraries suitable for Whole Genome Sequencing (WGS). However, most protocols require several enzymatic and cleanup steps, thus increasing the complexity and length of protocols, while robustness and speed are key factors for clinical applications. To tackle this issue, we developed a single-tube, single-step, streamlined protocol, exploiting the ligation-mediated PCR (LM-PCR) Whole Genome Amplification (WGA) method, for low-pass genome sequencing with the Ion Torrent™ platform and copy number alteration (CNA) calling from single cells. The method was evaluated on single cells isolated from 6 aberrant cell lines of the NCI-H series. In addition, to demonstrate the feasibility of the workflow on clinical samples, we analyzed single circulating tumor cells (CTCs) and white blood cells (WBCs) isolated from the blood of patients affected by prostate cancer or lung adenocarcinoma. The results obtained show that the developed workflow generates data accurately representing whole genome absolute copy number profiles of single cells and allows alteration calling at resolutions down to 100 kbp with as few as 200,000 reads. The presented data demonstrate the feasibility of the Ampli1™ WGA-based low-pass workflow for detection of CNAs in single tumor cells, which would be of particular interest for genome-driven targeted therapy selection and for monitoring of disease progression.
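
    The sketch below illustrates the generic principle behind low-pass copy-number profiling (binning read counts, median normalization, log2 ratios) on simulated data; it is not the Ampli1 WGA workflow or its caller, and real pipelines additionally correct for GC content and mappability before segmentation.

```python
# Generic low-pass CNA illustration: bin counts -> median-normalize -> log2 ratio.
import numpy as np

rng = np.random.default_rng(0)
expected_per_bin = 100                       # roughly 200,000 reads over ~2,000 bins
counts = rng.poisson(expected_per_bin, size=2000).astype(float)
counts[500:700] *= 1.5                       # simulate a single-copy gain (3 copies vs 2)

norm = counts / np.median(counts)            # median-normalize across bins
log2_ratio = np.log2(norm + 1e-9)

# crude check: mean log2 ratio inside vs outside the simulated event
print("event  mean log2:", log2_ratio[500:700].mean().round(2))
print("normal mean log2:", np.r_[log2_ratio[:500], log2_ratio[700:]].mean().round(2))
```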

  20. Worklist handling in workflow-enabled radiological application systems

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens

    2000-05-01

    For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which automatically provide workflow participants with work items that reflect the tasks to be performed. The display of worklists and the functions associated with work items are the visible part of an information system using a workflow management approach for its end users. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely influence work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists -- if present at all -- are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.
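
    The toy sketch below contrasts with the data-driven approach by deriving a worklist from explicit task state and role assignments, as an autonomous workflow service might; the task and role names are invented for illustration.

```python
# Derive a per-role worklist from explicit task state rather than a database view.
from dataclasses import dataclass

@dataclass
class Task:
    id: str
    description: str
    required_role: str
    state: str   # "ready", "running", "done"

def worklist_for(user_roles: set, tasks: list) -> list:
    """Work items a workflow service would offer to this participant."""
    return [t for t in tasks if t.state == "ready" and t.required_role in user_roles]

tasks = [
    Task("t1", "Acquire CT study", "technologist", "ready"),
    Task("t2", "Report CT study", "radiologist", "ready"),
    Task("t3", "Verify report", "radiologist", "running"),
]
for item in worklist_for({"radiologist"}, tasks):
    print(item.id, item.description)   # only t2 is offered
```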

  1. Nanocuration workflows: Establishing best practices for identifying, inputting, and sharing data to inform decisions on nanomaterials

    PubMed Central

    Powers, Christina M; Mills, Karmann A; Morris, Stephanie A; Klaessig, Fred; Gaheen, Sharon; Lewinski, Nastassja

    2015-01-01

    There is a critical opportunity in the field of nanoscience to compare and integrate information across diverse fields of study through informatics (i.e., nanoinformatics). This paper is one in a series of articles on the data curation process in nanoinformatics (nanocuration). Other articles in this series discuss key aspects of nanocuration (temporal metadata, data completeness, database integration), while the focus of this article is on the nanocuration workflow, or the process of identifying, inputting, and reviewing nanomaterial data in a data repository. In particular, the article discusses: 1) the rationale and importance of a defined workflow in nanocuration, 2) the influence of organizational goals or purpose on the workflow, 3) established workflow practices in other fields, 4) current workflow practices in nanocuration, 5) key challenges for workflows in emerging fields like nanomaterials, 6) examples to make these challenges more tangible, and 7) recommendations to address the identified challenges. Throughout the article, there is an emphasis on illustrating key concepts and current practices in the field. Data on current practices in the field are from a group of stakeholders active in nanocuration. In general, the development of workflows for nanocuration is nascent, with few individuals formally trained in data curation or utilizing available nanocuration resources (e.g., ISA-TAB-Nano). Additional emphasis on the potential benefits of cultivating nanomaterial data via nanocuration processes (e.g., capability to analyze data from across research groups) and providing nanocuration resources (e.g., training) will likely prove crucial for the wider application of nanocuration workflows in the scientific community. PMID:26425437

  2. Implementation and Evaluation of a Fully Automated Multiplex Real-Time PCR Assay on the BD Max Platform to Detect and Differentiate Herpesviridae from Cerebrospinal Fluids

    PubMed Central

    Köller, Thomas; Kurze, Daniel; Lange, Mirjam; Scherdin, Martin; Podbielski, Andreas; Warnke, Philipp

    2016-01-01

    A fully automated multiplex real-time PCR assay—including a sample process control and a plasmid-based positive control—for the detection and differentiation of herpes simplex virus 1 (HSV1), herpes simplex virus 2 (HSV2) and varicella-zoster virus (VZV) from cerebrospinal fluids (CSF) was developed on the BD Max platform. Performance was compared to an established accredited multiplex real time PCR protocol utilizing the easyMAG and the LightCycler 480/II, both very common devices in viral molecular diagnostics. For clinical validation, 123 CSF specimens and 40 reference samples from national interlaboratory comparisons were examined with both methods, resulting in 97.6% and 100% concordance for CSF and reference samples, respectively. Utilizing the BD Max platform revealed sensitivities of 173 (CI 95%, 88–258) copies/ml for HSV1, 171 (CI 95%, 148–194) copies/ml for HSV2 and 84 (CI 95%, 5–163) copies/ml for VZV. Cross reactivity could be excluded by checking 25 common viral, bacterial and fungal human pathogens. Workflow analyses displayed shorter test duration as well as remarkably fewer and easier preparation steps, with the potential to reduce error rates occurring when manually assessing patient samples. This protocol allows for a fully automated PCR assay on the BD Max platform for the simultaneous detection of herpesviridae from CSF specimens. Singular or multiple infections due to HSV1, HSV2 and VZV can reliably be differentiated with good sensitivities. Control parameters are included within the assay, thereby making it suitable for current quality management requirements. PMID:27092772

  3. Automated High-Throughput Permethylation for Glycosylation Analysis of Biologics Using MALDI-TOF-MS.

    PubMed

    Shubhakar, Archana; Kozak, Radoslaw P; Reiding, Karli R; Royle, Louise; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    2016-09-06

    Monitoring glycoprotein therapeutics for changes in glycosylation throughout the drug's life cycle is vital, as glycans significantly modulate the stability, biological activity, serum half-life, safety, and immunogenicity. Biopharma companies are increasingly adopting Quality by Design (QbD) frameworks for measuring, optimizing, and controlling drug glycosylation. Permethylation of glycans prior to analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is a valuable tool for glycan characterization and for screening of large numbers of samples in QbD drug realization. However, the existing protocols for manual permethylation and liquid-liquid extraction (LLE) steps are labor intensive and are thus not practical for high-throughput (HT) studies. Here we present a glycan permethylation protocol, based on 96-well microplates, that has been developed into a kit suitable for HT work. The workflow is largely automated using a liquid handling robot and includes N-glycan release, enrichment of N-glycans, permethylation, and LLE. The kit has been validated according to industry analytical performance guidelines and applied to characterize biopharmaceutical samples, including IgG4 monoclonal antibodies (mAbs) and recombinant human erythropoietin (rhEPO). The HT permethylation enabled glycan characterization and relative quantitation with minimal side reactions: the MALDI-TOF-MS profiles obtained were in good agreement with hydrophilic liquid interaction chromatography (HILIC) and ultrahigh performance liquid chromatography (UHPLC) data. Automated permethylation and extraction of 96 glycan samples was achieved in less than 5 h and automated data acquisition on MALDI-TOF-MS took on average less than 1 min per sample. This automated and HT glycan preparation and permethylation proved to be convenient, fast, and reliable and can be applied for drug glycan profiling and clinical glycan biomarker studies.

  4. Implementing bioinformatic workflows within the bioextract server

    USDA-ARS?s Scientific Manuscript database

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  5. Coupling of a continuum ice sheet model and a discrete element calving model using a scientific workflow system

    NASA Astrophysics Data System (ADS)

    Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut

    2017-04-01

    Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves the execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the model executions, this workflow also contains data format conversion tasks that link the ice flow and calving steps through sequential, nested and iterative stages. Thus, the management and monitoring of all processing tasks, including data management and transfer, become more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, as more scripts or modifications were introduced to meet user requirements, debugging and validating the results became more cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be handled in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of high-level scientific workflow middleware makes the reproduction of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).

  6. Data Integration Tool: Permafrost Data Debugging

    NASA Astrophysics Data System (ADS)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.

    2017-12-01

    We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget performs a specific operation, such as reading, multiplying by a constant, sorting, plotting, or writing data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
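
    A minimal sketch of the widget-pipeline idea follows, with widgets echoing the examples in the abstract (multiply by a constant, sort); it is illustrative only and is not the actual DIT code (see https://github.com/PermaData/DIT).

```python
# Small single-purpose operations composed into a reorderable pipeline.

def multiply(factor):
    return lambda rows: [r * factor for r in rows]

def sort_rows():
    return lambda rows: sorted(rows)

def run_pipeline(data, widgets):
    for widget in widgets:
        data = widget(data)
    return data

raw_depths_cm = [230, 120, 310, 95]
pipeline = [multiply(0.01), sort_rows()]       # convert cm -> m, then sort
print(run_pipeline(raw_depths_cm, pipeline))   # depths in metres, sorted ascending
```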

  7. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match the actual slide color and the inter-scanner variation is minimal. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant to the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041
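
    A hedged example of the CIEDE2000 metric mentioned above, computed with scikit-image (assumed to be available), follows; it shows only the metric itself, not the ICC calibration and profiling workflow of the paper, and the patch colors are made up.

```python
# Quantify the color difference between a reference patch and its scanned
# reproduction using CIEDE2000 via scikit-image.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

reference_rgb = np.array([[[0.80, 0.55, 0.50]]])   # target patch color (floats in 0-1)
scanned_rgb   = np.array([[[0.78, 0.56, 0.52]]])   # color reproduced by a scanner

delta_e = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(scanned_rgb))
print(f"DeltaE2000 = {delta_e.item():.2f}")        # values below ~1 are imperceptible
```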

  8. Processes in scientific workflows for information seeking related to physical sample materials

    NASA Astrophysics Data System (ADS)

    Ramdeen, S.

    2014-12-01

    The majority of State Geological Surveys have repositories containing cores, cuttings, fossils or other physical sample material. State surveys maintain these collections to support their own research as well as the research conducted by external users from other organizations. This includes organizations such as government agencies (state and federal), academia, industry and the public. The preliminary results presented in this paper will look at the research processes of these external users. In particular: how they discover, access and use digital surrogates, which they use to evaluate and access physical items in these collections. Data such as physical samples are materials that cannot be completely replaced with digital surrogates. Digital surrogates may be represented as metadata, which enable discovery and ultimately access to these samples. These surrogates may be found in records, databases, publications, etc. But surrogates do not completely prevent the need for access to the physical item as they cannot be subjected to chemical testing and/or other similar analysis. The goal of this research is to document the various processes external users perform in order to access physical materials. Data for this study will be collected by conducting interviews with these external users. During the interviews, participants will be asked to describe the workflow that lead them to interact with state survey repositories, and what steps they took afterward. High level processes/categories of behavior will be identified. These processes will be used in the development of an information seeking behavior model. This model may be used to facilitate the development of management tools and other aspects of cyberinfrastructure related to physical samples.

  9. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2015-06-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2016-03-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. nmsBuilder: Freeware to create subject-specific musculoskeletal models for OpenSim.

    PubMed

    Valente, Giordano; Crimi, Gianluigi; Vanella, Nicola; Schileo, Enrico; Taddei, Fulvia

    2017-12-01

    Musculoskeletal modeling and simulations of movement have been increasingly used in orthopedic and neurological scenarios, with increased attention to subject-specific applications. In general, musculoskeletal modeling applications have been facilitated by the development of dedicated software tools; however, subject-specific studies have also been limited by time-consuming modeling workflows and the highly skilled expertise required. In addition, no reference tools exist to standardize the process of musculoskeletal model creation and make it more efficient. Here we present a freely available software application, nmsBuilder 2.0, to create musculoskeletal models in the file format of OpenSim, a widely-used open-source platform for musculoskeletal modeling and simulation. nmsBuilder 2.0 is the result of a major refactoring of a previous implementation that took a first step toward an efficient workflow for subject-specific model creation. nmsBuilder includes a graphical user interface that provides access to all functionalities, based on a framework for computer-aided medicine written in C++. The operations implemented can be used in a workflow to create OpenSim musculoskeletal models from 3D surfaces. A first step includes data processing to create supporting objects necessary to create models, e.g. surfaces, anatomical landmarks, reference systems; and a second step includes the creation of OpenSim objects, e.g. bodies, joints, muscles, and the corresponding model. We present a case study using nmsBuilder 2.0: the creation of an MRI-based musculoskeletal model of the lower limb. The model included four rigid bodies, five degrees of freedom and 43 musculotendon actuators, and was created from 3D surfaces of the segmented images of a healthy subject through the modeling workflow implemented in the software application. We have presented nmsBuilder 2.0 for the creation of musculoskeletal OpenSim models from image-based data, and made it freely available via nmsbuilder.org. This application provides an efficient workflow for model creation and helps standardize the process. We hope this will help promote personalized applications in musculoskeletal biomechanics, including larger sample size studies, and might also represent a basis for future developments for specific applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Usability Testing of a National Substance Use Screening Tool Embedded in Electronic Health Records.

    PubMed

    Press, Anne; DeStio, Catherine; McCullagh, Lauren; Kapoor, Sandeep; Morley, Jeanne; Conigliaro, Joseph

    2016-07-08

    Screening, brief intervention, and referral to treatment (SBIRT) is currently being implemented in health systems nationally via paper and electronic methods. The purpose of this study was to evaluate the integration of an electronic SBIRT tool into an existing paper-based SBIRT clinical workflow in a patient-centered medical home. Usability testing was conducted in an academic ambulatory clinic. Two rounds of usability testing were done with medical office assistants (MOAs) using a paper and electronic version of the SBIRT tool, with two and four participants, respectively. Qualitative and quantitative data were analyzed to determine the impact of both tools on clinical workflow. A second round of usability testing was done with the revised electronic version and compared with the first version. Personal workflow barriers cited in the first round of testing were that the electronic health record (EHR) tool was disruptive to patients' visits. In Round 2 of testing, MOAs reported favoring the electronic version due to improved layout and the inclusion of an alert system embedded in the EHR. For example, using the system usability scale (SUS), MOAs reported a grade "1" for the statement, "I would like to use this system frequently" during the first round of testing but a "5" during the second round of analysis. The importance of testing the usability of the various mediums of tools used in health care screening is highlighted by the findings of this study. In the first round of testing, the electronic tool was reported as less user friendly, being difficult to navigate, and time consuming. Many issues faced in the first generation of the tool were improved in the second generation after usability was evaluated. This study demonstrates how usability testing of an electronic SBIRT tool can help to identify challenges that can impact clinical workflow. However, a limitation of this study was the small sample size of MOAs who participated. The results may have been biased toward Northwell Health workers' perceptions of the SBIRT tool and their specific clinical workflow.

  13. A workflow for improving estimates of microplastic contamination in marine waters: A case study from North-Western Australia.

    PubMed

    Kroon, Frederieke; Motti, Cherie; Talbot, Sam; Sobral, Paula; Puotinen, Marji

    2018-07-01

    Plastic pollution is ubiquitous throughout the marine environment, with microplastic (i.e. <5 mm) contamination a global issue of emerging concern. The lack of universally accepted methods for quantifying microplastic contamination, including consistent application of microscopy, photography and spectroscopy, may result in unrealistic contamination estimates. Here, we present and apply an analysis workflow tailored to quantifying microplastic contamination in marine waters, incorporating stereomicroscopic visual sorting, microscopic photography and attenuated total reflectance (ATR) Fourier transform infrared (FTIR) spectroscopy. The workflow outlines step-by-step processing and associated decision making, thereby reducing bias in plastic identification and improving confidence in contamination estimates. Specific processing steps include (i) the use of a commercial algorithm-based comparison of particle spectra against an extensive commercially curated spectral library, followed by spectral interpretation to establish the chemical composition, (ii) a comparison against a customised contaminant spectral library to eliminate procedural contaminants, and (iii) final assignment of particles as either natural- or anthropogenic-derived materials, based on chemical type, a comparative analysis of each particle's spectrum against other particle spectra, and physical characteristics of particles. Applying this workflow to 54 tow samples collected in marine waters of North-Western Australia visually identified 248 potential anthropogenic particles. Subsequent ATR-FTIR spectroscopy, chemical assignment and visual re-inspection of photographs established 144 (58%) particles to be of anthropogenic origin. Of the original 248 particles, 97 (39%) were ultimately confirmed to be plastics, with 85 of these (34%) classified as microplastics, demonstrating that over 60% of particles may be misidentified as plastics if visual identification is not complemented by spectroscopy. Combined, this tailored analysis workflow outlines a consistent and sequential process to quantify contamination by microplastics and other anthropogenic microparticles in marine waters. Importantly, its application will contribute to more realistic estimates of microplastic contamination in marine waters, informing both ecological risk assessments and experimental concentrations in effect studies. Copyright © 2018 Australian Institute of Marine Science. Published by Elsevier Ltd. All rights reserved.

  14. Integrating prediction, provenance, and optimization into high energy workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schram, M.; Bansal, V.; Friese, R. D.

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.

  15. The R-package eseis - A toolbox to weld geomorphic, seismologic, spatial, and time series analysis

    NASA Astrophysics Data System (ADS)

    Dietze, Michael

    2017-04-01

    Environmental seismology is the science of investigating the seismic signals that are emitted by Earth surface processes. This emerging field provides unique opportunities to identify, locate, track and inspect a wide range of the processes that shape our planet. Modern broadband seismometers are sensitive enough to detect signals from sources as weak as wind interacting with the ground and as powerful as collapsing mountains. This places the field of environmental seismology at the seams of many geoscientific disciplines and requires integration of a series of specialised analysis techniques. R provides the perfect environment for this challenge. The package eseis uses the foundations laid by a series of existing packages and data types tailored to solve specialised problems (e.g., signal, sp, rgdal, Rcpp, matrixStats) and thus provides access to efficiently handling large streams of seismic data (> 300 million samples per station and day). It supports standard data formats (mseed, sac), preparation techniques (deconvolution, filtering, rotation), processing methods (spectra, spectrograms, event picking, migration for localisation) and data visualisation. Thus, eseis provides a seamless approach to the entire workflow of environmental seismology and passes the output to related analysis fields with temporal, spatial and modelling focus in R.

  16. Untargeted Metabolomics Strategies—Challenges and Emerging Directions

    NASA Astrophysics Data System (ADS)

    Schrimpe-Rutledge, Alexandra C.; Codreanu, Simona G.; Sherrod, Stacy D.; McLean, John A.

    2016-12-01

    Metabolites are building blocks of cellular function. These species are involved in enzyme-catalyzed chemical reactions and are essential for cellular function. Upstream biological disruptions result in a series of metabolomic changes and, as such, the metabolome holds a wealth of information that is thought to be most predictive of phenotype. Uncovering this knowledge is a work in progress. The field of metabolomics is still maturing; the community has leveraged proteomics experience when applicable and developed a range of sample preparation and instrument methodology along with myriad data processing and analysis approaches. Research focuses have now shifted toward a fundamental understanding of the biology responsible for metabolomic changes. There are several types of metabolomics experiments including both targeted and untargeted analyses. While untargeted, hypothesis generating workflows exhibit many valuable attributes, challenges inherent to the approach remain. This Critical Insight comments on these challenges, focusing on the identification process of LC-MS-based untargeted metabolomics studies—specifically in mammalian systems. Biological interpretation of metabolomics data hinges on the ability to accurately identify metabolites. The range of confidence associated with identifications that is often overlooked is reviewed, and opportunities for advancing the metabolomics field are described.

  17. Searching for microbial protein over-expression in a complex matrix using automated high throughput MS-based proteomics tools.

    PubMed

    Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob

    2013-03-10

    In the discovery of new enzymes, genomic and cDNA expression libraries containing thousands of differential clones are generated to capture biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow for high throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix, based on mass spectrometry techniques. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS(2) approach including MS(2) protein identification was developed for microorganisms with a complex protein secretome, with a throughput of ∼2000 samples per week. TCA-induced protein precipitation enhanced by addition of bovine serum albumin is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detection of successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.
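
    To illustrate the role principal component analysis can play in flagging over-expressing wells (not the published screening pipeline itself), the sketch below applies scikit-learn's PCA to simulated intensity profiles in which a few wells carry extra peaks.

```python
# Flag wells whose simulated spectra deviate from the bulk using PCA scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(90, 500))      # 90 wells, 500 m/z bins
expressers = rng.normal(0.0, 1.0, size=(6, 500))
expressers[:, 200:220] += 4.0                           # over-expressed protein peaks
spectra = np.vstack([background, expressers])

scores = PCA(n_components=2).fit_transform(spectra)
pc1 = scores[:, 0]
outliers = np.where(np.abs(pc1 - pc1.mean()) > 3 * pc1.std())[0]
print("candidate over-expressing wells:", outliers)     # should flag indices 90-95
```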

  18. Antibody-Coupled Magnetic Beads Can Be Reused in Immuno-MRM Assays To Reduce Cost and Extend Antibody Supply.

    PubMed

    Zhao, Lei; Whiteaker, Jeffrey R; Voytovich, Uliana J; Ivey, Richard G; Paulovich, Amanda G

    2015-10-02

    Immunoaffinity enrichment of peptides coupled to targeted, multiple reaction monitoring mass spectrometry (immuno-MRM) enables precise quantification of peptides. Affinity-purified polyclonal antibodies are routinely used as affinity reagents in immuno-MRM assays, but they are not renewable, limiting the number of experiments that can be performed. In this technical note, we describe a workflow to regenerate anti-peptide polyclonal antibodies coupled to magnetic beads for enrichments in multiplex immuno-MRM assays. A multiplexed panel of 44 antibodies (targeting 60 peptides) is used to show that peptide analytes can be effectively stripped from the antibodies by acid washing without compromising assay performance. The performance of the multiplexed panel (determined by correlation, agreement, and precision of reused assays) is reproducible (R(2) between 0.81 and 0.99) and consistent (median CVs 8-15%) for at least 10 cycles of washing and reuse. Application of this workflow to immuno-MRM studies greatly reduces the per-sample assay cost and increases the number of samples that can be interrogated with a limited supply of polyclonal antibody reagent. This allows more extensive characterization of promising targets before committing funds and effort to conversion to a renewable monoclonal antibody.
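
    The sketch below illustrates, with invented numbers, how the two reuse metrics quoted above can be computed: the squared Pearson correlation between measurements made with fresh and reused antibody beads, and the coefficient of variation (CV) across repeated wash-and-reuse cycles. It is not the analysis code used in the study.

    ```python
    # Sketch of the two reuse metrics: R^2 between fresh- and reused-bead
    # measurements, and the CV of one peptide across repeated reuse cycles.
    # All values are made up for illustration.
    import numpy as np

    fresh  = np.array([1.00, 0.52, 2.30, 0.11, 4.80])   # peak-area ratios, fresh beads
    reused = np.array([0.95, 0.55, 2.10, 0.12, 4.60])   # same peptides, reused beads

    r_squared = np.corrcoef(fresh, reused)[0, 1] ** 2

    cycles = np.array([0.95, 1.02, 0.98, 1.05, 0.99])   # one peptide over five cycles
    cv_percent = 100 * cycles.std(ddof=1) / cycles.mean()

    print(f"R^2 = {r_squared:.3f}, CV = {cv_percent:.1f}%")
    ```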

  19. Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.

    2013-12-01

    The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, and 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed with three distinct applications in mind: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. It is therefore important to investigate automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely and provides additional advantages. In particular, it is possible to group independent simulations into a single job to fit the available computational resources. SWfMS also provides a basic level of fault resilience, as the workflow can be resumed at the correct state preceding a failure. Some of the best candidates for our particular workflow are Kepler and Swift; the latter appears to be the most serious candidate for a large-scale workflow on a single supercomputer while remaining sufficiently simple to accommodate further modifications and improvements.
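
    As a toy illustration of the idea of grouping independent simulations into jobs that fit the available computational resources, the sketch below batches per-earthquake tasks and runs each batch concurrently. The task function is a stand-in for a real solver launch; nothing here is specific to ASDF, Kepler, or Swift.

    ```python
    # Sketch: group independent per-event simulations into fixed-size batches so
    # that no more tasks are in flight than one job allocation allows. A real
    # workflow system would submit these batches to an HPC scheduler instead.
    from concurrent.futures import ThreadPoolExecutor

    def simulate_event(event_id: str) -> str:
        # Placeholder for launching one forward/adjoint simulation.
        return f"{event_id}: done"

    events = [f"EQ{i:04d}" for i in range(32)]
    batch_size = 8   # chosen to match the resources of a single job

    for start in range(0, len(events), batch_size):
        batch = events[start:start + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            for result in pool.map(simulate_event, batch):
                print(result)
    ```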

  20. Structuring research methods and data with the research object model: genomics workflows as a case study.

    PubMed

    Hettne, Kristina M; Dharuri, Harish; Zhao, Jun; Wolstencroft, Katherine; Belhajjame, Khalid; Soiland-Reyes, Stian; Mina, Eleni; Thompson, Mark; Cruickshank, Don; Verdes-Montenegro, Lourdes; Garrido, Julian; de Roure, David; Corcho, Oscar; Klyne, Graham; van Schouwen, Reinout; 't Hoen, Peter A C; Bechhofer, Sean; Goble, Carole; Roos, Marco

    2014-01-01

    One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the metadata necessary for a scientist to understand and recreate the results of an experiment. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study where we analysed human metabolite variation using workflows. We present the application of the workflow-centric RO model for our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as "which particular data was input to a particular workflow to test a particular hypothesis?", and "which particular conclusions were drawn from a particular workflow?". Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well. The Research Object is available at http://www.myexperiment.org/packs/428. The Wf4Ever Research Object Model is available at http://wf4ever.github.io/ro.

  1. Comparison of manual and automated AmpliSeq™ workflows in the typing of a Somali population with the Precision ID Identity Panel.

    PubMed

    van der Heijden, Suzanne; de Oliveira, Susanne Juel; Kampmann, Marie-Louise; Børsting, Claus; Morling, Niels

    2017-11-01

    The Precision ID Identity Panel was used to type 109 Somali individuals in order to obtain allele frequencies for the Somali population. These frequencies were used to establish a Somali HID-SNP database, which will be used for the biostatistical calculations in family and immigration cases. Genotypes obtained with the Precision ID Identity Panel were found to be in almost complete concordance with genotypes obtained with the SNPforID PCR-SBE-CE assay. In seven SNP loci, silent alleles were identified, most of which were previously described in the literature. The project also set out to compare different AmpliSeq™ workflows to investigate the possibility of using automated library building in forensic genetic case work. In order to do so, the SNP typing of the Somalis was performed using three different workflows: 1) manual library building and sequencing on the Ion PGM™, 2) automated library building using the Biomek® 3000 and sequencing on the Ion PGM™, and 3) automated library building using the Ion Chef™ and sequencing on the Ion S5™. AmpliSeq™ workflows were compared based on coverage, locus balance, noise, and heterozygote balance. Overall, the Ion Chef™/Ion S5™ workflow was found to give the best results and required the least hands-on time in the laboratory. However, the Ion Chef™/Ion S5™ workflow was also the most expensive. The number of libraries that may be constructed in one Ion Chef™ library building run was limited to eight, which is too few for high-throughput workflows. The Biomek® 3000/Ion PGM™ workflow was found to perform similarly to the manual/Ion PGM™ workflow. This argues for the use of automated library building in forensic genetic case work. Automated library building decreases the workload of the laboratory staff, decreases the risk of pipetting errors, and simplifies the daily workflow in forensic genetic laboratories. Copyright © 2017 Elsevier B.V. All rights reserved.
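
    The sketch below illustrates, with invented locus names and read counts, two of the comparison metrics mentioned above: locus balance (each locus's share of the total reads in a library) and heterozygote balance (the ratio of minor to major allele reads at a heterozygous locus). It is not the evaluation code used in the study.

    ```python
    # Sketch of locus balance and heterozygote balance from per-allele read
    # counts. Locus names, counts, and the crude heterozygote threshold are
    # illustrative placeholders only.
    allele_counts = {
        "snp_01": (812, 790),    # heterozygous locus
        "snp_02": (1520, 3),     # homozygous locus (second count is background noise)
        "snp_03": (640, 610),    # heterozygous locus
    }

    total_reads = sum(a + b for a, b in allele_counts.values())
    for locus, (a, b) in allele_counts.items():
        locus_share = (a + b) / total_reads
        line = f"{locus}: locus share {locus_share:.2%}"
        if min(a, b) / max(a, b) > 0.10:          # crude heterozygote call
            line += f", heterozygote balance {min(a, b) / max(a, b):.2f}"
        print(line)
    ```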

  2. From the desktop to the grid: scalable bioinformatics via workflow conversion.

    PubMed

    de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver

    2016-03-12

    Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community; therefore, each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free, structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.
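
    As a rough illustration of the idea behind a platform-free tool description, the sketch below declares a tool's parameters, inputs, and outputs as data and generates a command line from that declaration. The structure and the tool shown are invented; this is not the actual Common Tool Descriptor (CTD) XML schema.

    ```python
    # Sketch: describe a command-line tool as data, then derive the concrete
    # invocation from the description. Tool name, flags, and files are invented.
    tool = {
        "name": "peak_picker",
        "executable": "peak_picker",
        "parameters": {"signal_to_noise": 2.0},
        "inputs": {"in": "spectra.mzML"},
        "outputs": {"out": "peaks.mzML"},
    }

    def build_command(desc: dict) -> list:
        cmd = [desc["executable"]]
        for name, value in {**desc["parameters"], **desc["inputs"], **desc["outputs"]}.items():
            cmd += [f"-{name}", str(value)]
        return cmd

    print(" ".join(build_command(tool)))
    ```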

  3. A scientific workflow framework for (13)C metabolic flux analysis.

    PubMed

    Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina

    2016-08-20

    Metabolic flux analysis (MFA) with (13)C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of (13)C MFA is the interactive nature of the computational workflow by which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). The workflow assembly is inevitably determined by the scientist, who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling (13)C MFA workflows on demand. (13)C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by (13)C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    The workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. The workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. It plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale, complicated applications of remote sensing science. Workflow validation is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize the potential waste of time and resources. To verify the semantic correctness of user-defined workflows, we propose in this paper a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing that knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.
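
    The sketch below illustrates the parameter matching side of the validation idea: each step's declared inputs must be produced upstream with a compatible semantic type, otherwise an error is reported. The vocabulary and the three-step workflow are invented for illustration and do not reflect the ontology built with Protégé in the study.

    ```python
    # Sketch: every input a workflow step declares must be produced by an
    # upstream step and carry a compatible semantic type. The semantic labels
    # and the example workflow are invented.
    steps = [
        {"name": "read_L1B",  "inputs": {},                           "outputs": {"radiance": "TOA_radiance"}},
        {"name": "aerosol",   "inputs": {"radiance": "TOA_radiance"}, "outputs": {"aot": "aerosol_optical_thickness"}},
        {"name": "land_temp", "inputs": {"aot": "surface_reflectance"}, "outputs": {"lst": "land_surface_temperature"}},
    ]

    available = {}
    for step in steps:
        for name, semantic_type in step["inputs"].items():
            if available.get(name) != semantic_type:
                print(f"validation error in '{step['name']}': input '{name}' "
                      f"expects {semantic_type}, found {available.get(name)}")
        available.update(step["outputs"])
    ```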

  5. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As a special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex data structures and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements lie especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate the capabilities of BPELPower. The study showed a standards-compliant, Web-based approach for properly supporting geospatial processing, with enhancements required only at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broad Web paradigms.

  6. Performance of an Automated Versus a Manual Whole-Body Magnetic Resonance Imaging Workflow.

    PubMed

    Stocker, Daniel; Finkenstaedt, Tim; Kuehn, Bernd; Nanz, Daniel; Klarhoefer, Markus; Guggenberger, Roman; Andreisek, Gustav; Kiefer, Berthold; Reiner, Caecilia S

    2018-04-24

    The aim of this study was to evaluate the performance of an automated workflow for whole-body magnetic resonance imaging (WB-MRI), which reduces user interaction compared with the manual WB-MRI workflow. This prospective study was approved by the local ethics committee. Twenty patients underwent WB-MRI for myopathy evaluation on a 3 T MRI scanner. Ten patients (7 women; age, 52 ± 13 years; body weight, 69.9 ± 13.3 kg; height, 173 ± 9.3 cm; body mass index, 23.2 ± 3.0) were examined with a prototypical automated WB-MRI workflow, which automatically segments the whole body, and 10 patients (6 women; age, 35.9 ± 12.4 years; body weight, 72 ± 21 kg; height, 169.2 ± 10.4 cm; body mass index, 24.9 ± 5.6) with the manual WB-MRI workflow. Overall image quality (IQ; 5-point scale: 5, excellent; 1, poor) and coverage of the study volume were assessed by 2 readers for each sequence (coronal T2-weighted turbo inversion recovery magnitude [TIRM] and axial contrast-enhanced T1-weighted [ce-T1w] gradient dual-echo sequence). Interreader agreement was evaluated with intraclass correlation coefficients. Examination time, number of user interactions, and MR technicians' acceptance rating (1, highest; 10, lowest) were compared between the two groups. Total examination time was significantly shorter for automated WB-MRI workflow versus manual WB-MRI workflow (30.0 ± 4.2 vs 41.5 ± 3.4 minutes, P < 0.0001) with significantly shorter planning time (2.5 ± 0.8 vs 14.0 ± 7.0 minutes, P < 0.0001). Planning took 8% of the total examination time with automated versus 34% with manual WB-MRI workflow (P < 0.0001). The number of user interactions with automated WB-MRI workflow was significantly lower compared with manual WB-MRI workflow (10.2 ± 4.4 vs 48.2 ± 17.2, P < 0.0001). Planning efforts were rated significantly lower by the MR technicians for the automated WB-MRI workflow than for the manual WB-MRI workflow (2.20 ± 0.92 vs 4.80 ± 2.39, respectively; P = 0.005). Overall IQ was similar between automated and manual WB-MRI workflow (TIRM: 4.00 ± 0.94 vs 3.45 ± 1.19, P = 0.264; ce-T1w: 4.20 ± 0.88 vs 4.55 ± 0.55, P = 0.423). Interreader agreement for overall IQ was excellent for TIRM and ce-T1w, with intraclass correlation coefficients of 0.95 (95% confidence interval, 0.86-0.98) and 0.88 (95% confidence interval, 0.70-0.95), respectively. Incomplete coverage of the thoracic compartment in the ce-T1w sequence occurred more often in the automated WB-MRI workflow (P = 0.008) for reader 2. No other significant differences in the study volume coverage were found. In conclusion, the automated WB-MRI scanner workflow showed a significant reduction of the examination time and the user interaction compared with the manual WB-MRI workflow. Image quality and the coverage of the study volume were comparable in both groups.

  7. Integrated workflows for spiking neuronal network simulations

    PubMed Central

    Antolík, Ján; Davison, Andrew P.

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902
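
    As a generic illustration of hierarchically organized configuration, the sketch below recursively merges an experiment-specific configuration over a base configuration. It conveys the idea only and is not Mozaik's actual configuration file format.

    ```python
    # Sketch of hierarchical configuration: experiment-specific values override
    # a base configuration, recursing into nested sections. Keys and values are
    # invented placeholders.
    def merge(base: dict, override: dict) -> dict:
        merged = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                merged[key] = merge(base[key], value)
            else:
                merged[key] = value
        return merged

    base_config = {
        "model": {"layers": ["L4", "L2/3"], "neurons_per_layer": 4000},
        "recording": {"variables": ["spikes"], "fraction": 0.1},
    }
    experiment_config = {
        "model": {"neurons_per_layer": 8000},
        "recording": {"variables": ["spikes", "v"]},
    }

    print(merge(base_config, experiment_config))
    ```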

  8. Create, run, share, publish, and reference your LC-MS, FIA-MS, GC-MS, and NMR data analysis workflows with the Workflow4Metabolomics 3.0 Galaxy online infrastructure for metabolomics.

    PubMed

    Guitton, Yann; Tremblay-Franco, Marie; Le Corguillé, Gildas; Martin, Jean-François; Pétéra, Mélanie; Roger-Mele, Pierrick; Delabrière, Alexis; Goulitquer, Sophie; Monsoor, Misharl; Duperier, Christophe; Canlet, Cécile; Servien, Rémi; Tardivel, Patrick; Caron, Christophe; Giacomoni, Franck; Thévenot, Etienne A

    2017-12-01

    Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the multiplicity of bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M; http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, alga evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data and output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high-value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but it is also the first reference repository for metabolomics workflows. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Integrated workflows for spiking neuronal network simulations.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.

  10. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
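
    The sketch below illustrates the dataflow idea in miniature: each task declares its dependencies, and every task whose dependencies are satisfied becomes eligible to run concurrently. This is generic Python, not the Copernicus API.

    ```python
    # Sketch of a dataflow program as a dependency graph: tasks with satisfied
    # dependencies run in parallel, wave by wave. Task names are invented.
    from concurrent.futures import ThreadPoolExecutor

    dependencies = {
        "seed_structures": [],
        "sim_A": ["seed_structures"],
        "sim_B": ["seed_structures"],
        "sim_C": ["seed_structures"],
        "markov_state_model": ["sim_A", "sim_B", "sim_C"],
    }

    def run(task: str) -> None:
        print(f"running {task}")

    done = set()
    with ThreadPoolExecutor() as pool:
        while len(done) < len(dependencies):
            ready = [t for t, deps in dependencies.items()
                     if t not in done and all(d in done for d in deps)]
            # All ready tasks are independent, so they can execute concurrently.
            list(pool.map(run, ready))
            done.update(ready)
    ```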

  11. Text mining meets workflow: linking U-Compare with Taverna

    PubMed Central

    Kano, Yoshinobu; Dobson, Paul; Nakanishi, Mio; Tsujii, Jun'ichi; Ananiadou, Sophia

    2010-01-01

    Summary: Text mining from the biomedical literature is of increasing importance, yet it is not easy for the bioinformatics community to create and run text mining workflows due to the lack of accessibility and interoperability of the text mining resources. The U-Compare system provides a wide range of bio text mining resources in a highly interoperable workflow environment where workflows can very easily be created, executed, evaluated and visualized without coding. We have linked U-Compare to Taverna, a generic workflow system, to expose text mining functionality to the bioinformatics community. Availability: http://u-compare.org/taverna.html, http://u-compare.org Contact: kano@is.s.u-tokyo.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20709690

  12. Optimizing and benchmarking de novo transcriptome sequencing: from library preparation to assembly evaluation.

    PubMed

    Hara, Yuichiro; Tatsumi, Kaori; Yoshida, Michio; Kajikawa, Eriko; Kiyonari, Hiroshi; Kuraku, Shigehiro

    2015-11-18

    RNA-seq enables gene expression profiling in selected spatiotemporal windows and yields massive sequence information with relatively low cost and time investment, even for non-model species. However, there remains considerable room for optimizing its workflow in order to take full advantage of continuously developing sequencing capacity. Transcriptome sequencing for three embryonic stages of Madagascar ground gecko (Paroedura picta) was performed with the Illumina platform. The output reads were assembled de novo for reconstructing transcript sequences. In order to evaluate the completeness of transcriptome assemblies, we prepared a reference gene set consisting of vertebrate one-to-one orthologs. To take advantage of increased read lengths of >150 nt, we shortened the RNA fragmentation time, which resulted in a dramatic shift in the insert size distribution. To evaluate products of multiple de novo assembly runs incorporating reads with different RNA sources, read lengths, and insert sizes, we introduce a new reference gene set, core vertebrate genes (CVG), consisting of 233 genes that are shared as one-to-one orthologs by all vertebrate genomes examined (29 species). The completeness assessment performed by the computational pipelines CEGMA and BUSCO with reference to CVG demonstrated higher accuracy and resolution than with the gene set previously established for this purpose. As a result of the assessment with CVG, we have derived the most comprehensive transcript sequence set of the Madagascar ground gecko by means of assembling individual libraries followed by clustering the assembled sequences based on their overall similarities. Our results provide several insights into optimizing the de novo RNA-seq workflow, including the coordination between library insert size and read length, which manifested itself in improved assembly connectivity. The approach and assembly assessment with CVG demonstrated here would be applicable to transcriptome analysis of other species as well as whole genome analyses.
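
    As a hedged illustration of the completeness assessment idea, the sketch below reports the fraction of a core gene set recovered among the genes detected in an assembly. The identifiers are placeholders rather than the real 233-gene CVG list, and the snippet does not reproduce how CEGMA or BUSCO work internally.

    ```python
    # Sketch of a completeness score: the fraction of a reference core-gene set
    # that is recovered in an assembly. Gene identifiers are invented.
    core_genes = {f"CVG{i:03d}" for i in range(1, 234)}            # 233 core genes
    detected_in_assembly = {f"CVG{i:03d}" for i in range(1, 220)} | {"novel_001"}

    recovered = core_genes & detected_in_assembly
    completeness = len(recovered) / len(core_genes)
    print(f"completeness: {completeness:.1%} ({len(recovered)}/{len(core_genes)})")
    ```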

  13. Marginal discrepancy of noble metal-ceramic fixed dental prosthesis frameworks fabricated by conventional and digital technologies.

    PubMed

    Afify, Ahmed; Haney, Stephan; Verrett, Ronald; Mansueto, Michael; Cray, James; Johnson, Russell

    2018-02-01

    Studies evaluating the marginal adaptation of available computer-aided design and computer-aided manufacturing (CAD-CAM) noble alloys for metal-ceramic prostheses are lacking. The purpose of this in vitro study was to evaluate the vertical marginal adaptation of cast, milled, and direct metal laser sintered (DMLS) noble metal-ceramic 3-unit fixed partial denture (FDP) frameworks before and after fit adjustments. Two typodont teeth were prepared for metal-ceramic FDP abutments. An acrylic resin pattern of the prepared teeth was fabricated and cast in nickel-chromium (Ni-Cr) alloy. Each specimen group (cast, milled, DMLS) was composed of 12 casts made from 12 impressions (n=12). A single design for the FDP substructure was created on a laboratory scanner and used for designing the specimens in the 3 groups. Each specimen was fitted to its corresponding cast by using up to 5 adjustment cycles, and marginal discrepancies were measured on the master Ni-Cr model before and after laboratory fit adjustments. The milled and DMLS groups had smaller marginal discrepancy measurements than those of the cast group (P<.001). Significant differences were found in the number of adjustments among the groups, with the milled group requiring the minimum number of adjustments, followed by the DMLS and cast groups (F=30.643, P<.001). Metal-ceramic noble alloy frameworks fabricated by using a CAD-CAM workflow had significantly smaller marginal discrepancies compared with those with a traditional cast workflow, with the milled group demonstrating the best marginal fit among the 3 test groups. Manual refining significantly enhanced the marginal fit of all groups. All 3 groups demonstrated marginal discrepancies within the range of clinical acceptability. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  14. wft4galaxy: a workflow testing tool for galaxy.

    PubMed

    Piras, Marco Enrico; Pireddu, Luca; Zanetti, Gianluigi

    2017-12-01

    Workflow managers for scientific analysis provide a high-level programming platform facilitating standardization, automation, collaboration and access to sophisticated computing resources. The Galaxy workflow manager provides a prime example of this type of platform. As compositions of simpler tools, workflows effectively comprise specialized computer programs implementing often very complex analysis procedures. To date, no simple way to automatically test Galaxy workflows and ensure their correctness has appeared in the literature. With wft4galaxy we offer a tool that brings automated testing to Galaxy workflows, making it feasible to introduce continuous integration into their development and ensuring that defects are detected promptly. wft4galaxy can be easily installed as a regular Python program or launched directly as a Docker container; the latter reduces installation effort to a minimum. Available at https://github.com/phnmnl/wft4galaxy under the Academic Free License v3.0. marcoenrico.piras@crs4.it. © The Author 2017. Published by Oxford University Press.

  15. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  16. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  17. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  18. Separating Business Logic from Medical Knowledge in Digital Clinical Workflows Using Business Process Model and Notation and Arden Syntax.

    PubMed

    de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea

    2018-01-01

    Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during a clinical diagnostic process, thus improving decision-making. Our objective was to implement a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B, where institution-specific medical decision-making processes could be adapted without modifying the workflow business logic. Separation of business logic and medical decision-making results in more easily reusable electronic clinical workflows.
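
    The sketch below illustrates the separation principle in miniature: the workflow layer orchestrates a fixed sequence of steps, while the medical decision logic lives in a separate, replaceable rule function. The rule shown is a simplified stand-in, not the Arden Syntax medical logic module used in the study.

    ```python
    # Sketch: business logic (fixed step sequence) kept apart from medical
    # knowledge (a replaceable rule function). The rule is a simplified stand-in.
    def hbv_prophylaxis_rule(mother_hbsag_positive: bool) -> list:
        """Institution-specific medical knowledge (replaceable)."""
        if mother_hbsag_positive:
            return ["administer HBIG to newborn", "start hepatitis B vaccination"]
        return ["routine hepatitis B vaccination schedule"]

    def run_workflow(patient: dict, rule=hbv_prophylaxis_rule) -> None:
        """Business logic: orchestration only, independent of the rule content."""
        print(f"register patient {patient['id']}")
        for recommendation in rule(patient["hbsag_positive"]):
            print("recommend:", recommendation)
        print("document decision in the record")

    run_workflow({"id": "A-001", "hbsag_positive": True})
    ```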

  19. CamBAfx: Workflow Design, Implementation and Application for Neuroimaging

    PubMed Central

    Ooi, Cinly; Bullmore, Edward T.; Wink, Alle-Meije; Sendur, Levent; Barnes, Anna; Achard, Sophie; Aspden, John; Abbott, Sanja; Yue, Shigang; Kitzbichler, Manfred; Meunier, David; Maxim, Voichita; Salvador, Raymond; Henty, Julian; Tait, Roger; Subramaniam, Naresh; Suckling, John

    2009-01-01

    CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations like programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating exchange of innovative computational tools between originating labs. PMID:19826470

  20. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  1. Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Ramachandran, R.; Lynnes, C.

    2009-05-01

    A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).

  2. Improving adherence to the Epic Beacon ambulatory workflow.

    PubMed

    Chackunkal, Ellen; Dhanapal Vogel, Vishnuprabha; Grycki, Meredith; Kostoff, Diana

    2017-06-01

    Computerized physician order entry has been shown to significantly improve chemotherapy safety by reducing the number of prescribing errors. Epic's Beacon Oncology Information System of computerized physician order entry and electronic medication administration was implemented in Henry Ford Health System's ambulatory oncology infusion centers on 9 November 2013. Since that time, compliance with the infusion workflow had not been assessed. The objective of this study was to optimize the current workflow and improve compliance with this workflow in the ambulatory oncology setting. This retrospective, quasi-experimental study analyzed the composite workflow compliance rate of patient encounters from 9 to 23 November 2014. Based on this analysis, an intervention was identified and implemented in February 2015 to improve workflow compliance. The primary endpoint was to compare the composite rate of compliance with the Beacon workflow before and after a pharmacy-initiated intervention. The intervention, which was education of infusion center staff, was initiated by ambulatory-based oncology pharmacists and implemented by a multidisciplinary team of pharmacists and nurses. The composite compliance rate was then reassessed for patient encounters from 2 to 13 March 2015 in order to analyze the effects of the intervention on compliance. The initial analysis in November 2014 revealed a composite compliance rate of 38%, and data analysis after the intervention revealed a statistically significant increase in the composite compliance rate to 83% (p < 0.001). This study supports the conclusion that a pharmacist-initiated educational intervention can improve compliance with an ambulatory oncology infusion workflow.

  3. Exploring Dental Providers’ Workflow in an Electronic Dental Record Environment

    PubMed Central

    Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N.; Ye, Zhan

    2016-01-01

    Background: A workflow is defined as a predefined set of work steps and partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at point of care to assess breakdown in the workflow which could contribute to better technology designs. Objective: The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. Methods: A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Results: Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). Conclusions: On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR. PMID:27437058

  4. Workflow continuity--moving beyond business continuity in a multisite 24-7 healthcare organization.

    PubMed

    Kolowitz, Brian J; Lauro, Gonzalo Romero; Barkey, Charles; Black, Harry; Light, Karen; Deible, Christopher

    2012-12-01

    As hospitals move towards providing in-house 24 × 7 services, there is an increasing need for information systems to be available around the clock. This study investigates one organization's need for a workflow continuity solution that provides around the clock availability for information systems that do not provide highly available services. The organization investigated is a large multifacility healthcare organization that consists of 20 hospitals and more than 30 imaging centers. A case analysis approach was used to investigate the organization's efforts. The results show an overall reduction in downtimes where radiologists could not continue their normal workflow on the integrated Picture Archiving and Communications System (PACS) solution by 94 % from 2008 to 2011. The impact of unplanned downtimes was reduced by 72 % while the impact of planned downtimes was reduced by 99.66 % over the same period. Additionally more than 98 h of radiologist impact due to a PACS upgrade in 2008 was entirely eliminated in 2011 utilizing the system created by the workflow continuity approach. Workflow continuity differs from high availability and business continuity in its design process and available services. Workflow continuity only ensures that critical workflows are available when the production system is unavailable due to scheduled or unscheduled downtimes. Workflow continuity works in conjunction with business continuity and highly available system designs. The results of this investigation revealed that this approach can add significant value to organizations because impact on users is minimized if not eliminated entirely.

  5. 78 FR 22880 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    ... between Health IT and Ambulatory Care Workflow Redesign.'' In accordance with the Paperwork Reduction Act... Understand the Relationship between Health IT and Ambulatory Care Workflow Redesign. The Agency for... Methods to Better Understand the Relationship between Health IT and Ambulatory Care Workflow Redesign...

  6. Generation of cell type-specific monoclonal antibodies for the planarian and optimization of sample processing for immunolabeling.

    PubMed

    Forsthoefel, David J; Waters, Forrest A; Newmark, Phillip A

    2014-12-21

    Efforts to elucidate the cellular and molecular mechanisms of regeneration have required the application of methods to detect specific cell types and tissues in a growing cohort of experimental animal models. For example, in the planarian Schmidtea mediterranea, substantial improvements to nucleic acid hybridization and electron microscopy protocols have facilitated the visualization of regenerative events at the cellular level. By contrast, immunological resources have been slower to emerge. Specifically, the repertoire of antibodies recognizing planarian antigens remains limited, and a more systematic approach is needed to evaluate the effects of processing steps required during sample preparation for immunolabeling. To address these issues and to facilitate studies of planarian digestive system regeneration, we conducted a monoclonal antibody (mAb) screen using phagocytic intestinal cells purified from the digestive tracts of living planarians as immunogens. This approach yielded ten antibodies that recognized intestinal epitopes, as well as markers for the central nervous system, musculature, secretory cells, and epidermis. In order to improve signal intensity and reduce non-specific background for a subset of mAbs, we evaluated the effects of fixation and other steps during sample processing. We found that fixative choice, treatments to remove mucus and bleach pigment, as well as methods for tissue permeabilization and antigen retrieval profoundly influenced labeling by individual antibodies. These experiments led to the development of a step-by-step workflow for determining optimal specimen preparation for labeling whole planarians as well as unbleached histological sections. We generated a collection of monoclonal antibodies recognizing the planarian intestine and other tissues; these antibodies will facilitate studies of planarian tissue morphogenesis. We also developed a protocol for optimizing specimen processing that will accelerate future efforts to generate planarian-specific antibodies, and to extend functional genetic studies of regeneration to post-transcriptional aspects of gene expression, such as protein localization or modification. Our efforts demonstrate the importance of systematically testing multiple approaches to species-specific idiosyncrasies, such as mucus removal and pigment bleaching, and may serve as a template for the development of immunological resources in other emerging model organisms.

  7. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2015-12-01

    Research and analyze the resulting technological impact of the pharmacy robotics implementation on medication errors, pharmacist productivity, nurse satisfaction/workflow, and patient satisfaction (truncated record excerpt).

  8. Provenance Storage, Querying, and Visualization in PBase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kianmajd, Parisa; Ludascher, Bertram; Missier, Paolo

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.
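
    As an illustration of querying stored provenance with SPARQL from Python, the sketch below loads a tiny Turtle graph with rdflib and selects the inputs used by one trace. The "ex:" vocabulary and the graph content are invented placeholders; they do not reproduce the ProvONE terms or PBase's tree cover encoding.

    ```python
    # Sketch: store a minimal provenance graph in RDF and query it with SPARQL.
    # The vocabulary is invented; it is not the actual ProvONE model.
    from rdflib import Graph

    turtle = """
    @prefix ex: <http://example.org/prov#> .
    ex:trace1 ex:executedWorkflow ex:wf42 .
    ex:trace1 ex:usedInput ex:dataset7 .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    query = """
    PREFIX ex: <http://example.org/prov#>
    SELECT ?trace ?input WHERE {
        ?trace ex:executedWorkflow ex:wf42 ;
               ex:usedInput ?input .
    }
    """
    for trace, dataset in g.query(query):
        print(trace, "used", dataset)
    ```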

  9. Context-aware workflow management of mobile health applications.

    PubMed

    Salden, Alfons; Poortinga, Remco

    2006-01-01

    We propose a medical application management architecture that allows medical (IT) experts to readily design, develop and deploy context-aware mobile health (m-health) applications or services. In particular, we elaborate on how our application workflow management architecture enables chaining, coordinating, composing, and adapting context-sensitive medical application components such that critical Quality of Service (QoS) and Quality of Context (QoC) requirements typical for m-health applications or services can be met. This functional architectural support requires learning modules for distilling the application-critical selection of attention and anticipation models. These models will help medical experts construct and adjust m-health application workflows and workflow strategies on the fly. We illustrate our context-aware workflow management paradigm for an m-health data delivery problem, in which optimal communication network configurations have to be determined.

  10. Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in ultrascale environments

    DOE PAGES

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...

    2016-10-06

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Better integration of storage systems and workflow engines is clearly needed to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  11. Prototype of Kepler Processing Workflows For Microscopy And Neuroinformatics

    PubMed Central

    Astakhov, V.; Bandrowski, A.; Gupta, A.; Kulungowski, A.W.; Grethe, J.S.; Bouwer, J.; Molina, T.; Rowley, V.; Penticoff, S.; Terada, M.; Wong, W.; Hakozaki, H.; Kwon, O.; Martone, M.E.; Ellisman, M.

    2016-01-01

    We report on progress of employing the Kepler workflow engine to prototype “end-to-end” application integration workflows that concern data coming from microscopes deployed at the National Center for Microscopy Imaging Research (NCMIR). This system is built upon the mature code base of the Cell Centered Database (CCDB) and integrated rule-oriented data system (IRODS) for distributed storage. It provides integration with external projects such as the Whole Brain Catalog (WBC) and Neuroscience Information Framework (NIF), which benefit from NCMIR data. We also report on specific workflows which spawn from main workflows and perform data fusion and orchestration of Web services specific for the NIF project. This “Brain data flow” presents a user with categorized information about sources that have information on various brain regions. PMID:28479932

  12. Workflow technology: the new frontier. How to overcome the barriers and join the future.

    PubMed

    Shefter, Susan M

    2006-01-01

    Hospitals are catching up to the business world in the introduction of technology systems that support professional practice and workflow. The field of case management is highly complex and interrelates with diverse groups in diverse locations. The last few years have seen the introduction of Workflow Technology Tools, which can improve the quality and efficiency of discharge planning by the case manager. Despite the availability of these wonderful new programs, many case managers are hesitant to adopt the new technology and workflow. For a myriad of reasons, a computer-based workflow system can seem like a brick wall. This article discusses, from a practitioner's point of view, how professionals can gain confidence and skill to get around the brick wall and join the future.

  13. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
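    The parameter-space search can be illustrated with a toy sketch: enumerate a small grid of workflow parameters, reject points that violate an accuracy constraint, and keep the fastest remaining configuration. The parameter names and the runtime/quality models below are placeholders, not the framework's actual interface.

        # Sketch: exhaustive search over a small workflow parameter space.
        # chunk_size and quality_level are hypothetical parameters; the runtime
        # and quality models are placeholders for real measurements.
        import itertools

        def runtime(chunk_size, quality_level):
            return 100.0 / chunk_size + 20.0 * quality_level

        def quality(quality_level):
            return {1: 0.80, 2: 0.92, 3: 0.99}[quality_level]

        best = None
        for chunk_size, quality_level in itertools.product([1, 2, 4, 8], [1, 2, 3]):
            if quality(quality_level) < 0.90:        # accuracy constraint
                continue
            t = runtime(chunk_size, quality_level)
            if best is None or t < best[0]:
                best = (t, chunk_size, quality_level)

        print("best runtime %.1f with chunk_size=%d, quality_level=%d" % best)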

  14. Managing and Communicating Operational Workflow: Designing and Implementing an Electronic Outpatient Whiteboard.

    PubMed

    Steitz, Bryan D; Weinberg, Stuart T; Danciu, Ioana; Unertl, Kim M

    2016-01-01

    Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. This article describes and discusses the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings.

  15. An efficient laboratory workflow for environmental risk assessment of organic chemicals.

    PubMed

    Zhu, Linyan; Santiago-Schübel, Beatrix; Xiao, Hongxia; Thiele, Björn; Zhu, Zhiliang; Qiu, Yanling; Hollert, Henner; Küppers, Stephan

    2015-07-01

    In this study, we demonstrate a fast and efficient workflow to investigate the transformation mechanism of organic chemicals and evaluate the toxicity of their transformation products (TPs) at laboratory scale. The transformation process of organic chemicals was first simulated by electrochemistry coupled online to mass spectrometry (EC-MS). The simulated reactions were scaled up in a batch EC reactor to obtain larger amounts of a reaction mixture. The mixture sample was purified and concentrated by solid phase extraction (SPE) for further ecotoxicological testing. The combined toxicity of the reaction mixture was evaluated in the fish egg test (FET) (Danio rerio) and compared to the parent compound. The workflow was verified with carbamazepine (CBZ). Using EC-MS, seven primary TPs of CBZ were identified; the degradation mechanism was elucidated and confirmed by comparison to the literature. The reaction mixture and one primary product (acridine) showed higher ecotoxicity in the fish egg assay, with 96 h EC50 values of 1.6 and 1.0 mg L(-1), respectively, than CBZ (60.8 mg L(-1)). The results highlight the importance of studying transformation mechanisms and evaluating toxicological effects for organic chemicals released into the environment, since their transformation may increase toxicity. The developed process contributes a fast and efficient laboratory method for the risk assessment of organic chemicals and their TPs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Machine learning for fab automated diagnostics

    NASA Astrophysics Data System (ADS)

    Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard

    2017-06-01

    Process optimization depends largely on field engineers' knowledge and expertise. However, this practice is becoming less sustainable as fab complexity continuously increases to support the extreme miniaturization of Integrated Circuits. On the one hand, process optimization and root cause analysis of tools is necessary for smooth fab operation. On the other hand, the growing number of wafer processing steps is adding a considerable new source of noise which may have a significant impact at the nanometer scale. This paper explores the ability of historical process data and Machine Learning to support field engineers in production analysis and monitoring. We implement an automated workflow to analyze a large volume of information and build a predictive model of overlay variation. The proposed workflow addresses significant problems that are typical in fab production, such as missing measurements, small numbers of samples, confounding effects due to heterogeneity of data, and subpopulation effects. We evaluate the proposed workflow on a real use case and show that it is able to predict overlay excursions observed in Integrated Circuits manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that are causing defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
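    A minimal sketch of the kind of interpretable model the paper describes might look like the following: impute missing wafer-history measurements, fit a sparse linear (Lasso) model of overlay, and inspect the coefficients. The data and feature names are synthetic; this is not the authors' implementation.

        # Sketch of an interpretable overlay model: impute missing wafer-history
        # measurements, fit a sparse (Lasso) linear model, inspect coefficients.
        # Data are synthetic; feature names are invented for illustration.
        import numpy as np
        from sklearn.impute import SimpleImputer
        from sklearn.linear_model import Lasso
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))             # e.g. chuck temp, etch time, ...
        X[rng.random(X.shape) < 0.1] = np.nan     # simulate missing measurements
        y = 0.8 * np.nan_to_num(X[:, 1]) + rng.normal(scale=0.1, size=200)  # overlay

        model = make_pipeline(SimpleImputer(strategy="mean"), Lasso(alpha=0.05))
        model.fit(X, y)

        for name, coef in zip(["step_A", "step_B", "step_C", "step_D"],
                              model.named_steps["lasso"].coef_):
            print(f"{name}: {coef:+.3f}")   # large coefficients flag suspect steps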

  17. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2010-02-01

    note build and system setup. There was significant progress in identifying workflows and processes. A sample note for a patient visit can be seen...between EMR systems did not exist when surveys were completed in early 2009. There is a significant opportunity to enhance patient care of military...interpret the provider’s handwriting. Many EHRs have included decision support into their functionality to help alert physicians to medication conflicts

  18. Core lipid, surface lipid and apolipoprotein composition analysis of lipoprotein particles as a function of particle size in one workflow integrating asymmetric flow field-flow fractionation and liquid chromatography-tandem mass spectrometry

    PubMed Central

    Jones, Jeffery I.; Gardner, Michael S.; Schieltz, David M.; Parks, Bryan A.; Toth, Christopher A.; Rees, Jon C.; Andrews, Michael L.; Carter, Kayla; Lehtikoski, Antony K.; McWilliams, Lisa G.; Williamson, Yulanda M.; Bierbaum, Kevin P.; Pirkle, James L.; Barr, John R.

    2018-01-01

    Lipoproteins are complex molecular assemblies that are key participants in the intricate cascade of extracellular lipid metabolism with important consequences in the formation of atherosclerotic lesions and the development of cardiovascular disease. Multiplexed mass spectrometry (MS) techniques have substantially improved the ability to characterize the composition of lipoproteins. However, these advanced MS techniques are limited by traditional pre-analytical fractionation techniques that compromise the structural integrity of lipoprotein particles during separation from serum or plasma. In this work, we applied a highly effective and gentle hydrodynamic size based fractionation technique, asymmetric flow field-flow fractionation (AF4), and integrated it into a comprehensive tandem mass spectrometry based workflow that was used for the measurement of apolipoproteins (apos A-I, A-II, A-IV, B, C-I, C-II, C-III and E), free cholesterol (FC), cholesterol esters (CE), triglycerides (TG), and phospholipids (PL) (phosphatidylcholine (PC), sphingomyelin (SM), phosphatidylethanolamine (PE), phosphatidylinositol (PI) and lysophosphatidylcholine (LPC)). Hydrodynamic size in each of 40 size fractions separated by AF4 was measured by dynamic light scattering. Measuring all major lipids and apolipoproteins in each size fraction and in the whole serum, using a total of 0.1 mL, allowed the volumetric calculation of lipoprotein particle numbers and expression of composition in molar analyte per particle number ratios. Measurements in 110 serum samples showed substantive differences between size fractions of HDL and LDL. Lipoprotein composition within size fractions was expressed in molar ratios of analytes (A-I/A-II, C-II/C-I, C-II/C-III, E/C-III, FC/PL, SM/PL, PE/PL, and PI/PL), showing differences between sample categories with combinations of normal and high levels of Total-C and/or Total-TG. The agreement with previous studies indirectly validates the AF4-LC-MS/MS approach and demonstrates the potential of this workflow for characterization of lipoprotein composition in clinical studies using small volumes of archived frozen samples. PMID:29634782
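    The molar analyte-per-particle ratios can be illustrated with simple arithmetic. In the sketch below the concentrations are invented, and the particle count rests on the common assumption of one apoB molecule per LDL particle; it is meant only to show the unit handling, not the study's actual calculation.

        # Illustrative arithmetic only (values invented): express lipid
        # composition of an LDL-sized fraction as molar analyte-per-particle
        # ratios, assuming one apoB molecule per LDL particle.
        apoB_nmol_per_L = 50.0        # measured apolipoprotein B in the fraction
        fc_umol_per_L   = 30.0        # free cholesterol
        pl_umol_per_L   = 45.0        # total phospholipid
        sm_umol_per_L   = 10.0        # sphingomyelin

        particles_nmol_per_L = apoB_nmol_per_L    # 1 apoB per particle (assumption)

        fc_per_particle = (fc_umol_per_L * 1000) / particles_nmol_per_L  # molar ratio
        sm_over_pl      = sm_umol_per_L / pl_umol_per_L

        print(f"FC per particle (molar ratio): {fc_per_particle:.0f}")
        print(f"SM/PL molar ratio: {sm_over_pl:.2f}")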

  19. Asterism: an integrated, complete, and open-source approach for running seismologist continuous data-intensive analysis on heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.
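    The two-level composition can be pictured with a generic sketch (this is not the Pegasus or dispel4py API): an outer dependency of Phase2 on Phase1, where Phase1 is itself data-parallel over the input traces.

        # Generic sketch of the two-level composition (NOT the Pegasus or
        # dispel4py API): Phase2 depends on Phase1, and Phase1 is data-parallel.
        from concurrent.futures import ProcessPoolExecutor

        def preprocess(trace):           # stands in for the "Phase1" workflow
            return trace.strip().lower()

        def cross_correlate(traces):     # stands in for "Phase2"
            return [(a, b) for a in traces for b in traces if a < b]

        def run(raw_traces):
            # outer dependency: Phase2 consumes everything Phase1 produces
            with ProcessPoolExecutor() as pool:
                cleaned = list(pool.map(preprocess, raw_traces))
            return cross_correlate(cleaned)

        if __name__ == "__main__":
            print(run(["STA1 ", "STA2 ", "STA3 "]))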

  20. SU-F-T-251: The Quality Assurance for the Heavy Patient Load Department in the Developing Country: The Primary Experience of An Entire Workflow QA Process Management in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Wang, J; Peng, J

    Purpose: To implement an entire workflow quality assurance (QA) process in the radiotherapy department and to reduce radiotherapy error rates based on entire-workflow management in a developing country. Methods: The entire workflow QA process starts from patient registration and extends to the end of the last treatment, covering all steps of the radiotherapy process. The chart-check error rate is used to evaluate the entire workflow QA process. Two to three qualified senior medical physicists checked the documents before the first treatment fraction of every patient. Random checks of the treatment history during treatment were also performed. Treatment data from around 6,000 patients before and after implementing the entire workflow QA process were compared from May 2014 to December 2015. Results: A systematic checklist was established. It mainly includes patient registration, treatment plan QA, information export to the OIS (Oncology Information System), treatment documentation QA, and QA of the treatment history. The chart-check error rate decreased from 1.7% to 0.9% after introducing the entire workflow QA process. All errors detected before the first treatment fraction were corrected as soon as the oncologist re-confirmed them, and staff training was reinforced accordingly to prevent recurrence. Conclusion: The entire workflow QA process improved the safety and quality of radiotherapy in our department, and we consider that our QA experience can be applicable to heavily loaded radiotherapy departments in developing countries.
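    As a back-of-the-envelope check of such a change, one can test whether a drop from 1.7% to 0.9% is statistically meaningful. The per-period patient counts below are assumed (the abstract reports only about 6,000 patients in total), so the sketch is illustrative rather than a reanalysis.

        # Assumed split of ~6,000 patients into before/after periods; is the
        # drop in chart-check error rate from 1.7% to 0.9% statistically real?
        from scipy.stats import chi2_contingency

        n_before, n_after = 3000, 3000                 # assumed, not reported
        errors_before = round(0.017 * n_before)        # 51
        errors_after = round(0.009 * n_after)          # 27

        table = [[errors_before, n_before - errors_before],
                 [errors_after,  n_after - errors_after]]

        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, p={p:.4f}")           # p < 0.05 suggests a real drop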

  1. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Vickie E.; Borreguero, Jose M.; Bhowmik, Debsindhu

    Highlights: • An automated workflow to optimize force-field parameters. • The workflow was used to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to other experimental and simulation techniques. Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments and to establish a connection between the fundamental physics at the nanoscale and the data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D₂O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without it. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.
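    The optimization loop can be sketched conceptually as follows: vary a force-field parameter, generate a model observable, and minimize the misfit to the experimental curve. In the real workflow each objective evaluation would launch an MD simulation and compute the scattering function; here a simple exponential decay and synthetic data stand in for both.

        # Conceptual sketch only: fit a force-field parameter by minimizing the
        # misfit between a model relaxation curve and synthetic QENS-like data.
        import numpy as np
        from scipy.optimize import minimize_scalar

        t = np.linspace(0.1, 50.0, 100)                 # time (arbitrary units)
        true_rate = 0.12
        experiment = np.exp(-true_rate * t) + np.random.default_rng(1).normal(0, 0.01, t.size)

        def simulated_decay(interaction_strength):
            # placeholder for "run MD with this FF parameter, compute I(Q,t)"
            return np.exp(-interaction_strength * t)

        def misfit(param):
            return float(np.sum((simulated_decay(param) - experiment) ** 2))

        result = minimize_scalar(misfit, bounds=(0.01, 1.0), method="bounded")
        print(f"optimized parameter: {result.x:.3f}")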

  2. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.
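    The abstract mentions a simple Python API for describing the workflow graph. The sketch below is not Decaf's actual API; it only illustrates the kind of information such a description must carry: tasks with resource counts, and links that may run custom transformation code between producer and consumer.

        # Illustrative only -- NOT Decaf's actual API. Tasks carry resource
        # counts; links carry their own resources plus a transformation hook.
        workflow = {
            "tasks": {
                "md_sim":   {"ranks": 512, "exec": "./md_driver"},
                "analysis": {"ranks": 64,  "exec": "./density_estimator"},
            },
            "links": [
                {
                    "from": "md_sim",
                    "to": "analysis",
                    "ranks": 8,                       # resources for the dataflow itself
                    "transform": "subsample_atoms",   # custom code run inside the link
                },
            ],
        }

        def downstream_of(task, wf=workflow):
            return [link["to"] for link in wf["links"] if link["from"] == task]

        print(downstream_of("md_sim"))   # ['analysis']

    A real description would additionally express ports and message-driven triggering, which is the level of detail the actual Decaf API manages.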

  3. Experiences and lessons learned from creating a generalized workflow for data publication of field campaign datasets

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Ramachandran, R.; Deb, D.; Beaty, T.; Wright, D.

    2017-12-01

    This paper summarizes the workflow challenges of curating and publishing data produced from disparate data sources and provides a generalized workflow solution to efficiently archive data generated by researchers. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics and the Global Hydrology Resource Center (GHRC) DAAC have been collaborating on the development of a generalized workflow solution to efficiently manage the data publication process. The generalized workflow presented here is built on lessons learned from implementations of the workflow system. Data publication consists of the following steps: accepting the data package from the data providers and ensuring the full integrity of the data files; identifying and addressing data quality issues; assembling standardized, detailed metadata and documentation, including file-level details, processing methodology, and characteristics of data files; setting up data access mechanisms; setting up the data in data tools and services for improved data dissemination and user experience; registering the dataset in online search and discovery catalogues; and preserving the data location through Digital Object Identifiers (DOIs). We will describe the steps taken to automate the above process and realize efficiencies. The goals of the workflow system are to reduce the time taken to publish a dataset, to increase the quality of documentation and metadata, and to track individual datasets through the data curation process. Utilities developed to achieve these goals will be described. We will also share metrics demonstrating the value of the workflow system and discuss future steps towards the creation of a common software framework.
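    One way to track datasets through these steps, sketched here purely for illustration (the step names mirror the list above; everything else is assumed), is a small ordered-checklist record per dataset:

        # Illustrative tracker for the publication steps listed above.
        PUBLICATION_STEPS = [
            "accept_package", "quality_check", "assemble_metadata",
            "setup_access", "setup_tools_services", "register_catalogues", "assign_doi",
        ]

        class DatasetRecord:
            def __init__(self, name):
                self.name = name
                self.completed = []

            def complete(self, step):
                # steps must be completed in order
                assert step == PUBLICATION_STEPS[len(self.completed)]
                self.completed.append(step)

            @property
            def next_step(self):
                remaining = PUBLICATION_STEPS[len(self.completed):]
                return remaining[0] if remaining else "published"

        ds = DatasetRecord("campaign_2017_soil_moisture")   # hypothetical dataset
        ds.complete("accept_package")
        ds.complete("quality_check")
        print(ds.name, "->", ds.next_step)   # assemble_metadata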

  4. Structured recording of intraoperative surgical workflows

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Durstewitz, N.; Fischer, M.; Strauss, G.; Dietz, A.; Meixensberger, J.; Jannin, P.; Cleary, K.; Lemke, H. U.; Burgert, O.

    2006-03-01

    Surgical Workflows are used for the methodical and scientific analysis of surgical interventions. The approach described here is a step towards developing surgical assist systems based on Surgical Workflows and integrated control systems for the operating room of the future. This paper describes concepts and technologies for the acquisition of Surgical Workflows by monitoring surgical interventions, and for their presentation. Establishing systems which support the Surgical Workflow in operating rooms requires a multi-staged development process beginning with the description of these workflows. A formalized description of surgical interventions is needed to create a Surgical Workflow. This description can be used to analyze and evaluate surgical interventions in detail. We discuss the subdivision of surgical interventions into work steps at different levels of granularity and propose a recording scheme for the acquisition of manual surgical work steps from running interventions. To support the recording process during the intervention, we introduce a new software architecture. The core of the architecture is our Surgical Workflow editor, which is intended to deal with the manifold, complex, and concurrent relations during an intervention. Furthermore, we show a method for the automatic generation of graphs that display the recorded surgical work steps of the interventions. Finally, we conclude with considerations about extensions of our recording scheme to close the gap to S-PACS systems. The approach was used to record 83 surgical interventions from 6 intervention types in 3 different surgical disciplines: ENT surgery, neurosurgery, and interventional radiology. The interventions were recorded at the University Hospital Leipzig, Germany and at the Georgetown University Hospital, Washington, D.C., USA.

  5. Development of a Planet Tool for an interactive School Atlas as eBook

    NASA Astrophysics Data System (ADS)

    Wondrak, Stephan

    2018-05-01

    The present thesis describes the development of a planet tool for an interactive school atlas using an eBook format. In particular, the technical and cartographic capabilities of the open standard ePUB 3 are evaluated. An eBook application with interactive and dynamic 2-dimensional visualizations is developed to show whether the real-world dimensions and distances in the solar system can be mapped in a cartographically correct manner that is easy for students to understand. In the first part of the work, the requirements of the planet tool are evaluated in cooperation with experts. The open standards PDF and ePUB 3 are investigated with regard to the requirements for the development of the planet tool. Another chapter describes in detail all significant steps of the development process for a prototype of the planet tool. A graphic file originally created for print production is prepared and enhanced with interactive features to generate one of the eBook pages. This serves to show a potential workflow for the generation of eBook pages in a cross-media atlas production. All sample pages of the prototype show different layouts and contain the entire spectrum of interactive features and multimedia content of modern eBooks. The sample pages are presented and discussed in a dedicated chapter. The results of the present work aim to answer the question of whether the open standard ePUB 3 is suitable for the development of a multimedia eBook for high school education.

  6. A high-throughput Sanger strategy for human mitochondrial genome sequencing

    PubMed Central

    2013-01-01

    Background A population reference database of complete human mitochondrial genome (mtGenome) sequences is needed to enable the use of mitochondrial DNA (mtDNA) coding region data in forensic casework applications. However, the development of entire mtGenome haplotypes to forensic data quality standards is difficult and laborious. A Sanger-based amplification and sequencing strategy that is designed for automated processing, yet routinely produces high quality sequences, is needed to facilitate high-volume production of these mtGenome data sets. Results We developed a robust 8-amplicon Sanger sequencing strategy that regularly produces complete, forensic-quality mtGenome haplotypes in the first pass of data generation. The protocol works equally well on samples representing diverse mtDNA haplogroups and DNA input quantities ranging from 50 pg to 1 ng, and can be applied to specimens of varying DNA quality. The complete workflow was specifically designed for implementation on robotic instrumentation, which increases throughput and reduces both the opportunities for error inherent to manual processing and the cost of generating full mtGenome sequences. Conclusions The described strategy will assist efforts to generate complete mtGenome haplotypes which meet the highest data quality expectations for forensic genetic and other applications. Additionally, high-quality data produced using this protocol can be used to assess mtDNA data developed using newer technologies and chemistries. Further, the amplification strategy can be used to enrich for mtDNA as a first step in sample preparation for targeted next-generation sequencing. PMID:24341507

  7. Degradation of metallic materials studied by correlative tomography

    NASA Astrophysics Data System (ADS)

    Burnett, T. L.; Holroyd, N. J. H.; Lewandowski, J. J.; Ogurreck, M.; Rau, C.; Kelley, R.; Pickering, E. J.; Daly, M.; Sherry, A. H.; Pawar, S.; Slater, T. J. A.; Withers, P. J.

    2017-07-01

    There is a huge array of characterization techniques available today, and increasingly powerful computing resources allow for the effective analysis and modelling of large datasets. However, each experimental and modelling tool only spans limited time and length scales. Correlative tomography can be thought of as the extension of correlative microscopy into three dimensions, connecting different techniques, each providing different types of information or covering different time or length scales. Here the focus is on the linking of time-lapse X-ray computed tomography (CT) and serial-section electron tomography using the focussed ion beam (FIB)-scanning electron microscope to study the degradation of metals. Correlative tomography can provide new levels of detail by delivering a multiscale 3D picture of key regions of interest. Specifically, the Xe+ Plasma FIB is used as an enabling tool for large-volume high-resolution serial sectioning of materials, and also as a tool for preparation of microscale test samples and samples for nanoscale X-ray CT imaging. The exemplars presented illustrate general aspects relating to correlative workflows, as well as to the time-lapse characterisation of metal microstructures during various failure mechanisms, including ductile fracture of steel and the corrosion of aluminium and magnesium alloys. Correlative tomography is already providing significant insights into materials behaviour, linking together information from different instruments across different scales. Multiscale and multifaceted workflows will become increasingly routine, providing a feed into multiscale materials models as well as illuminating other areas, particularly where hierarchical structures are of interest.

  8. Interventional-Cardiovascular MR: Role of the Interventional MR Technologist

    PubMed Central

    Mazal, Jonathan R; Rogers, Toby; Schenke, William H; Faranesh, Anthony Z; Hansen, Michael; O’Brien, Kendall; Ratnayaka, Kanishka; Lederman, Robert J

    2016-01-01

    Background Interventional-cardiovascular magnetic resonance (iCMR) is a promising clinical tool for adults and children who need a comprehensive hemodynamic catheterization of the heart. Magnetic resonance (MR) imaging-guided cardiac catheterization offers radiation-free examination with increased soft tissue contrast and unconstrained imaging planes for catheter guidance. The interventional MR technologist plays an important role in the care of patients undergoing such procedures. It is therefore helpful for technologists to understand the unique iCMR preprocedural preparation, procedural and imaging workflows, and management of emergencies. The authors report their team’s experience from the National Institutes of Health Clinical Center and a collaborating pediatric site. PMID:26721838

  9. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    PubMed

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species, and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today. Currently, there is a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values, and the final claims in the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods has never been performed. The Journal of Proteomics has previously published multiple benchmark studies of bioinformatics algorithms (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims at providing a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five different algorithms (ProteinProphet, MSBayesPro, ProteinLP, Fido, and PIA) were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines (Mascot, X!Tandem, and MS-GF+) and combinations thereof were evaluated for every protein inference tool. In total, >186 protein lists were analyzed and carefully compared using three metrics for quality assessment of the protein inference results: 1) the number of reported proteins, 2) peptides per protein, and 3) the number of uniquely reported proteins per inference method, to address the quality of each inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms, and parameters on each dataset.
The results show that 1) PIA or Fido seems to be a good choice when studying the results of the analyzed workflow, regarding not only the reported proteins and the high-quality identifications but also the required runtime. 2) Merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group. 3) The usage of databases containing not only the canonical but also known isoforms of proteins has a small impact on the number of reported proteins. The detection of specific isoforms could, depending on the question behind the study, compensate for the slightly shorter parsimonious reports. 4) The current workflow can be easily extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
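    The three quality-control metrics named above can be computed from a per-method mapping of protein groups to peptide sets. The sketch below uses toy data and an invented data structure; it is only meant to make the metric definitions concrete.

        # Toy sketch of the three QC metrics: reported groups, mean peptides
        # per protein group, and proteins unique to one inference method.
        from statistics import mean

        results = {
            "PIA":  {"P1": {"pepA", "pepB"}, "P2": {"pepC"}},
            "Fido": {"P1": {"pepA"}, "P3": {"pepD", "pepE"}},
        }

        def n_groups(method):
            return len(results[method])

        def peptides_per_protein(method):
            return mean(len(peps) for peps in results[method].values())

        def unique_proteins(method):
            others = set().union(*(results[m].keys() for m in results if m != method))
            return set(results[method]) - others

        for m in results:
            print(m, n_groups(m), round(peptides_per_protein(m), 2), unique_proteins(m))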

  10. Analysis of Serum Total and Free PSA Using Immunoaffinity Depletion Coupled to SRM: Correlation with Clinical Immunoassay Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tao; Hossain, Mahmud; Schepmoes, Athena A.

    2012-08-03

    Sandwich immunoassay is the standard technique used in clinical labs for quantifying protein biomarkers for disease detection, monitoring and therapeutic intervention. Albeit highly sensitive, the development of a specific immunoassay is rather time-consuming and associated with extremely high cost due to the requirement for paired immunoaffinity reagents of high specificity. Recently, mass spectrometry-based methods, specifically selected reaction monitoring mass spectrometry (SRM-MS), have been increasingly applied to measure low abundance biomarker candidates in tissue and biofluids, owing to high sensitivity and specificity, simplicity of assay configuration, and great multiplexing capability. In this study, we report for the first time the development of immunoaffinity depletion-based workflows and SRM-MS assays that enable sensitive and accurate quantification of total and free prostate-specific antigen (PSA) in serum without the requirement for specific PSA antibodies. With stable isotope dilution and external calibration, low ng/mL level detection of both total and free PSA was consistently achieved in both PSA-spiked female serum samples and actual patient serum samples. Moreover, comparison of the results obtained when SRM PSA assays and conventional immunoassays were applied to the same samples showed very good correlation (R2 values ranging from 0.90 to 0.99) in several independent clinical serum sample sets, including a set of 33 samples assayed in a blinded test. These results demonstrate that the workflows and SRM assays developed here provide an attractive alternative for reliably measuring total and free PSA in human blood. Furthermore, simultaneous measurement of free and total PSA and many other biomarkers can be performed in a single analysis using high-resolution liquid chromatographic separation coupled with SRM-MS.
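    Stable-isotope-dilution quantification with external calibration can be sketched as fitting a line of light/heavy peak-area ratio against spiked concentration and inverting it for unknowns. The numbers below are invented and the linear model is an assumption for illustration.

        # Sketch of isotope-dilution quantification with external calibration
        # (numbers invented): fit light/heavy SRM peak-area ratio vs. spiked PSA
        # concentration, then read an unknown off the calibration line.
        import numpy as np

        spiked_ng_ml = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # calibrants in female serum
        area_ratio   = np.array([0.11, 0.21, 0.42, 1.02, 2.05]) # light/heavy peak areas

        slope, intercept = np.polyfit(spiked_ng_ml, area_ratio, 1)

        def quantify(ratio):
            return (ratio - intercept) / slope

        print(f"unknown at ratio 0.65 -> {quantify(0.65):.2f} ng/mL total PSA")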

  11. Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Ramachandran, R.; Lynnes, C.

    2009-12-01

    A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source Active BPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).

  12. Extension of specification language for soundness and completeness of service workflow

    NASA Astrophysics Data System (ADS)

    Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn

    2018-05-01

    A Service Workflow is an aggregation of distributed services that fulfill specific functionalities. With the ever increasing number of available services, methodologies for selecting services against given requirements have become main research subjects in multiple disciplines. A few researchers have contributed formal specification languages and methods for model checking; however, existing methods have difficulty tackling the complexity of workflow compositions. In this paper, we propose to formalize the specification language to reduce the complexity of workflow composition. To this end, we extend a specification language with formal logic, so that effective theorems can be derived for the verification of syntax, semantics, and inference rules in the workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec has been demonstrated with examples addressing soundness, completeness, and consistency.

  13. Development of a novel imaging informatics-based system with an intelligent workflow engine (IWEIS) to support imaging-based clinical trials

    PubMed Central

    Wang, Ximing; Liu, Brent J; Martinez, Clarisa; Zhang, Xuejun; Winstein, Carolee J

    2015-01-01

    Imaging-based clinical trials can benefit from a solution to efficiently collect, analyze, and distribute multimedia data at various stages within the workflow. Currently, the data management needs of these trials are typically addressed with custom-built systems. However, software development of custom-built systems for versatile workflows can be resource-consuming. To address these challenges, we present a system with a workflow engine for imaging-based clinical trials. The system enables a project coordinator to build a data collection and management system specifically related to the study protocol workflow without programming. A Web Access to DICOM Objects (WADO) module with novel features is integrated to further facilitate imaging-related studies. The system was initially evaluated in an imaging-based rehabilitation clinical trial. The evaluation shows that the development cost can be much reduced compared to a custom-built system. By providing a solution to customize a system and automate the workflow, the system will save development time and reduce errors, especially for imaging clinical trials. PMID:25870169

  14. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
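    A workflow like this depends on partitioning the protein sequence universe into chunks that parallel alignment jobs can process independently. The sketch below splits a FASTA file into fixed-size chunks; the file name and chunk size are assumptions for illustration.

        # Sketch: split a large protein FASTA into fixed-size chunks, one per
        # parallel alignment job. File name and chunk size are assumptions.
        def read_fasta(path):
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    line = line.rstrip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line, []
                    else:
                        seq.append(line)
            if header is not None:
                yield header, "".join(seq)

        def flush(chunk, idx):
            with open(f"proteins.part{idx:04d}.fasta", "w") as out:
                for header, seq in chunk:
                    out.write(f"{header}\n{seq}\n")

        def write_chunks(path, records_per_chunk=10000):
            chunk, idx = [], 0
            for record in read_fasta(path):
                chunk.append(record)
                if len(chunk) == records_per_chunk:
                    flush(chunk, idx)
                    chunk, idx = [], idx + 1
            if chunk:
                flush(chunk, idx)

        write_chunks("bacterial_proteins.fasta")   # hypothetical input file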

  15. Optimizing high performance computing workflow for protein functional annotation

    PubMed Central

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-01-01

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  16. An 18S rRNA Workflow for Characterizing Protists in Sewage, with a Focus on Zoonotic Trichomonads.

    PubMed

    Maritz, Julia M; Rogers, Krysta H; Rock, Tara M; Liu, Nicole; Joseph, Susan; Land, Kirkwood M; Carlton, Jane M

    2017-11-01

    Microbial eukaryotes (protists) are important components of terrestrial and aquatic environments, as well as animal and human microbiomes. Their relationships with metazoa range from mutualistic to parasitic and zoonotic (i.e., transmissible between humans and animals). Despite their ecological importance, our knowledge of protists in urban environments lags behind that of bacteria, largely due to a lack of experimentally validated high-throughput protocols that produce accurate estimates of protist diversity while minimizing non-protist DNA representation. We optimized protocols for detecting zoonotic protists in raw sewage samples, with a focus on trichomonad taxa. First, we investigated the utility of two commonly used variable regions of the 18S rRNA marker gene, V4 and V9, by amplifying and Sanger sequencing 23 different eukaryotic species, including 16 protist species such as Cryptosporidium parvum, Giardia intestinalis, Toxoplasma gondii, and species of trichomonad. Next, we optimized wet-lab methods for sample processing and Illumina sequencing of both regions from raw sewage collected from a private apartment building in New York City. Our results show that both regions are effective at identifying several zoonotic protists that may be present in sewage. A combination of small extractions (1 mL volumes) performed on the same day as sample collection, and the incorporation of a vertebrate blocking primer, is ideal to detect protist taxa of interest and combat the effects of metazoan DNA. We expect that the robust, standardized methods presented in our workflow will be applicable to investigations of protists in other environmental samples, and will help facilitate large-scale investigations of protistan diversity.

  17. Barcoding Sponges: An Overview Based on Comprehensive Sampling

    PubMed Central

    Vargas, Sergio; Schuster, Astrid; Sacher, Katharina; Büttner, Gabrielle; Schätzle, Simone; Läuchli, Benjamin; Hall, Kathryn; Hooper, John N. A.; Erpenbeck, Dirk; Wörheide, Gert

    2012-01-01

    Background Phylum Porifera includes ∼8,500 valid species distributed world-wide in aquatic ecosystems ranging from ephemeral fresh-water bodies to coastal environments and the deep-sea. The taxonomy and systematics of sponges is complicated, and morphological identification can be both time consuming and erroneous due to phenotypic convergence and secondary losses, etc. DNA barcoding can provide sponge biologists with a simple and rapid method for the identification of samples of unknown taxonomic membership. The Sponge Barcoding Project (www.spongebarcoding.org), the first initiative to barcode a non-bilaterian metazoan phylum, aims to provide a comprehensive DNA barcode database for Phylum Porifera. Methodology/Principal Findings ∼7,400 sponge specimens have been extracted, and amplification of the standard COI barcoding fragment has been attempted for approximately 3,300 museum samples with ∼25% mean amplification success. Based on this comprehensive sampling, we present the first report on the workflow and progress of the sponge barcoding project, and discuss some common pitfalls inherent to the barcoding of sponges. Conclusion A DNA-barcoding workflow capable of processing potentially large sponge collections has been developed and is routinely used for the Sponge Barcoding Project with success. Sponge specific problems such as the frequent co-amplification of non-target organisms have been detected and potential solutions are currently under development. The initial success of this innovative project have already demonstrated considerable refinement of sponge systematics, evaluating morphometric character importance, geographic phenotypic variability, and the utility of the standard barcoding fragment for Porifera (despite its conserved evolution within this basal metazoan phylum). PMID:22802937

  18. Evaluating multiplexed next-generation sequencing as a method in palynology for mixed pollen samples.

    PubMed

    Keller, A; Danner, N; Grimmer, G; Ankenbrand, M; von der Ohe, K; von der Ohe, W; Rost, S; Härtel, S; Steffan-Dewenter, I

    2015-03-01

    The identification of pollen plays an important role in ecology, palaeo-climatology, honey quality control and other areas. Currently, expert knowledge and reference collections are essential to identify pollen origin through light microscopy. Pollen identification through molecular sequencing and DNA barcoding has been proposed as an alternative approach, but the assessment of mixed pollen samples originating from multiple plant species is still a tedious and error-prone task. Next-generation sequencing has been proposed to avoid this hindrance. In this study we assessed mixed pollen samples through next-generation sequencing of amplicons from the highly variable, species-specific internal transcribed spacer 2 region of nuclear ribosomal DNA. Further, we developed a bioinformatic workflow to analyse these high-throughput data with a newly created reference database. To evaluate the feasibility, we compared results from classical identification based on light microscopy of the same samples with our sequencing results. We assessed in total 16 mixed pollen samples, 14 originating from honeybee colonies and two from solitary bee nests. The sequencing technique resulted in higher taxon richness (deeper assignments and more identified taxa) compared to light microscopy. Abundance estimations from sequencing data were significantly correlated with abundances counted through light microscopy. Simulation analyses of taxon specificity and sensitivity indicate that 96% of taxa present in the database are correctly identifiable at the genus level and 70% at the species level. Next-generation sequencing thus presents a useful and efficient workflow to identify pollen at the genus and species level without requiring specialised palynological expert knowledge. © 2014 German Botanical Society and The Royal Botanical Society of the Netherlands.
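    The reported abundance comparison can be illustrated with a rank correlation between per-taxon read fractions and microscopy counts; the numbers below are invented and serve only to show the shape of the comparison.

        # Toy illustration (invented values): correlate per-taxon sequence-read
        # proportions with light-microscopy pollen proportions.
        from scipy.stats import spearmanr

        taxa           = ["Brassica", "Trifolium", "Salix", "Prunus"]
        read_fraction  = [0.42, 0.30, 0.18, 0.10]   # from ITS2 amplicon reads
        microscopy_pct = [0.45, 0.25, 0.20, 0.10]   # from counted pollen grains

        rho, p = spearmanr(read_fraction, microscopy_pct)
        print(f"Spearman rho={rho:.2f}, p={p:.3f}")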

  19. COSMOS: Python library for massively parallel workflows

    PubMed Central

    Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.

    2014-01-01

    Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428

  20. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.

  1. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  2. Using Kepler for Tool Integration in Microarray Analysis Workflows.

    PubMed

    Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C

Increasing numbers of genomic technologies are producing massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed in different software environments, which makes integration of these diverse bioinformatics tools difficult. Kepler provides an open-source environment to integrate such disparate packages. Using Kepler, we integrated several external tools, including Bioconductor packages, AltAnalyze (a Python-based open-source tool), and an R-based comparison tool, to build an automated workflow that meta-analyzes both online and local microarray data. The automated workflow connects the integrated tools seamlessly, moves data between them smoothly, and hence improves the efficiency and accuracy of complex data analyses. Our workflow exemplifies the use of Kepler as a scientific workflow platform for bioinformatics pipelines.
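
    The integration problem described above, chaining tools written for different software environments, can be pictured with a bare-bones driver that calls an R script and then a Python tool on its output. The script names are hypothetical placeholders; Kepler itself provides a graphical, actor-based environment rather than a script like this.

        # Bare-bones illustration of chaining heterogeneous tools in one workflow:
        # an R script normalizes microarray data, a Python tool consumes it, and
        # another R script compares results. Script names are hypothetical.
        import subprocess

        def run(cmd):
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run(["Rscript", "normalize_microarray.R", "raw_data.csv", "normalized.csv"])
        run(["python", "altanalyze_like_tool.py", "--input", "normalized.csv",
             "--output", "differential_genes.tsv"])
        run(["Rscript", "compare_results.R", "differential_genes.tsv", "report.html"])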

  3. COSMOS: Python library for massively parallel workflows.

    PubMed

    Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J

    2014-10-15

    Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  4. The impact of computerized provider order entry systems on inpatient clinical workflow: a literature review.

    PubMed

    Niazkhani, Zahra; Pirnejad, Habibollah; Berg, Marc; Aarts, Jos

    2009-01-01

Previous studies have shown the importance of workflow issues in the implementation of computerized provider order entry (CPOE) systems and patient safety practices. To understand the impact of CPOE on clinical workflow, we developed a conceptual framework and conducted a literature search for CPOE evaluations published between 1990 and June 2007. Fifty-one publications were identified that reported mixed effects of CPOE systems. Frequently reported workflow advantages included legible orders, remote accessibility of the systems, and shorter order turnaround times. Frequently reported disadvantages included time-consuming and problematic user-system interactions and the enforcement of predefined relationships between clinical tasks and between providers. Given the diversity of findings in the literature, we conclude that more multi-method research is needed to explore CPOE's multidimensional and collective impact, especially on collaborative workflow.

  5. Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.

    PubMed

    Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir

    2014-01-01

Hospital waiting times are considerably long, with no signs of reducing anytime soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services and healthcare service workflows can be modelled as queueing nodes, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography and magnetic resonance imaging often form queues; thus, by taking into account the waiting time at each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting-time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
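
    As a toy illustration of the queueing framing (not the authors' model), the sketch below computes the expected wait at imaging services modelled as M/M/1 queues and routes a patient to the shorter queue, the flavour of decision a dynamic re-orchestration step could make. All rates are invented.

        # Expected waiting time in an M/M/1 queue, W_q = rho / (mu - lambda),
        # used to pick the less-loaded of two imaging services. Rates are invented.
        def mm1_wait(arrival_rate, service_rate):
            if arrival_rate >= service_rate:
                return float("inf")  # unstable queue
            rho = arrival_rate / service_rate
            return rho / (service_rate - arrival_rate)

        services = {
            "CT scanner A": mm1_wait(arrival_rate=3.0, service_rate=4.0),  # patients/hour
            "CT scanner B": mm1_wait(arrival_rate=2.0, service_rate=3.5),
        }
        best = min(services, key=services.get)
        print({k: round(v, 2) for k, v in services.items()}, "-> route to", best)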

  6. Scientific workflows as productivity tools for drug discovery.

    PubMed

    Shon, John; Ohkawa, Hitomi; Hammer, Juergen

    2008-05-01

    Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.

  7. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
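
    The three modelled stages translate into a simple additive estimate of end-to-end time per candidate site: transfer time plus expected queue wait plus reconstruction time. The sketch below applies that idea to pick the faster of two sites; all numbers are invented and this is not the paper's calibrated model.

        # Rough three-stage estimate: total ~ data transfer + queue wait + compute.
        # All numbers are invented; this is not the paper's calibrated model.
        def estimate_total_time(dataset_gb, bandwidth_gbps, queue_wait_s,
                                slice_count, seconds_per_slice, cores):
            transfer = dataset_gb * 8 / bandwidth_gbps          # seconds
            compute = slice_count * seconds_per_slice / cores   # ideal parallel scaling
            return transfer + queue_wait_s + compute

        sites = {
            "local cluster": estimate_total_time(500, 1, queue_wait_s=60,
                                                 slice_count=2048, seconds_per_slice=12, cores=128),
            "remote HPC": estimate_total_time(500, 10, queue_wait_s=900,
                                              slice_count=2048, seconds_per_slice=12, cores=2048),
        }
        print(min(sites, key=sites.get), {k: round(v) for k, v in sites.items()})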

  8. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  9. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  10. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain, and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by its respective domain experts.
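
    The service-chaining idea, each modeling component consuming the previous component's output, is sketched below with trivial stand-in functions in place of wrapped Web Services. The stage names and coefficients are hypothetical, not the OSSE components themselves.

        # Tiny illustration of chaining end-to-end modeling stages, each stage
        # consuming the previous one's output. Stages are hypothetical stand-ins
        # for wrapped Web Services, not the OSSE components themselves.
        def surface_reflectance(scene):          # ground-truth surface model
            return {"reflectance": scene["albedo"] * 0.95}

        def atmospheric_radiative_transfer(x):   # crude atmospheric term
            return {"radiance": x["reflectance"] * 0.8 + 0.02}

        def instrument_model(x):                 # sensing / detection
            return {"measured": x["radiance"] + 0.005}

        def retrieval(x):                        # invert back to surface reflectance
            return {"retrieved_reflectance": (x["measured"] - 0.025) / 0.8}

        pipeline = [surface_reflectance, atmospheric_radiative_transfer,
                    instrument_model, retrieval]
        state = {"albedo": 0.30}
        for stage in pipeline:
            state = stage(state)
        print(state)  # retrieved reflectance matches the modelled surface reflectance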

  11. Flexible Workflow Software enables the Management of an Increased Volume and Heterogeneity of Sensors, and evolves with the Expansion of Complex Ocean Observatory Infrastructures.

    NASA Astrophysics Data System (ADS)

    Tomlin, M. C.; Jenkyns, R.

    2015-12-01

    Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.

  12. Rethinking Clinical Workflow.

    PubMed

    Schlesinger, Joseph J; Burdick, Kendall; Baum, Sarah; Bellomy, Melissa; Mueller, Dorothee; MacDonald, Alistair; Chern, Alex; Chrouser, Kristin; Burger, Christie

    2018-03-01

    The concept of clinical workflow borrows from management and leadership principles outside of medicine. The only way to rethink clinical workflow is to understand the neuroscience principles that underlie attention and vigilance. With any implementation to improve practice, there are human factors that can promote or impede progress. Modulating the environment and working as a team to take care of patients is paramount. Clinicians must continually rethink clinical workflow, evaluate progress, and understand that other industries have something to offer. Then, novel approaches can be implemented to take the best care of patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology.

    PubMed

    Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias

    2011-03-21

    Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.

  14. Managing and Communicating Operational Workflow

    PubMed Central

    Weinberg, Stuart T.; Danciu, Ioana; Unertl, Kim M.

    2016-01-01

Background: Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. Objective: To describe and discuss the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). Methods: The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. Results: The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. Conclusions: The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings. PMID:27081407

  15. High-throughput 96-well solvent mediated sonic blending synthesis and on-plate solid/solution stability characterization of pharmaceutical cocrystals.

    PubMed

    Luu, Van; Jona, Janan; Stanton, Mary K; Peterson, Matthew L; Morrison, Henry G; Nagapudi, Karthik; Tan, Helming

    2013-01-30

    A 96-well high-throughput cocrystal screening workflow has been developed consisting of solvent-mediated sonic blending synthesis and on-plate solid/solution stability characterization by XRPD. A strategy of cocrystallization screening in selected blend solvents including water mixtures is proposed to not only manipulate solubility of the cocrystal components but also differentiate physical stability of the cocrystal products. Caffeine-oxalic acid and theophylline-oxalic acid cocrystals were prepared and evaluated in relation to saturation levels of the cocrystal components and stability of the cocrystal products in anhydrous and hydrous solvents. AMG 517 was screened with a number of coformers, and solid/solution stability of the resulting cocrystals on the 96-well plate was investigated. A stability trend was observed and confirmed that cocrystals comprised of lower aqueous solubility coformers tended to be more stable in water. Furthermore, cocrystals which could be isolated under hydrous solvent blending condition exhibited superior physical stability to those which could only be obtained under anhydrous condition. This integrated HTS workflow provides an efficient route in an API-sparing approach to screen and identify cocrystal candidates with proper solubility and solid/solution stability properties. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Quality Controlling CMIP datasets at GFDL

    NASA Astrophysics Data System (ADS)

    Horowitz, L. W.; Radhakrishnan, A.; Balaji, V.; Adcroft, A.; Krasting, J. P.; Nikonov, S.; Mason, E. E.; Schweitzer, R.; Nadeau, D.

    2017-12-01

As GFDL switches from model development to production for the Coupled Model Intercomparison Project (CMIP), its efforts shift to testing and, more importantly, to establishing guidelines and protocols for quality control and semi-automated data publishing. Every CMIP cycle introduces key challenges, and the upcoming CMIP6 is no exception. The new CMIP experimental design comprises multiple MIPs facilitating research in different focus areas. This paradigm has implications not only for the groups that develop the models and conduct the runs, but also for the groups that monitor, analyze and quality control the datasets before they are published and make their way into reports like the IPCC (Intergovernmental Panel on Climate Change) Assessment Reports. In this talk, we discuss some of the approaches taken at GFDL to quality control the CMIP-ready datasets, including Jupyter notebooks, PrePARE, and a LAMP (Linux, Apache, MySQL, PHP/Python/Perl) technology-driven tracker system that monitors the status of experiments qualitatively and quantitatively and provides additional metadata and analysis services, along with some in-built controlled-vocabulary validations in the workflow. In addition, we discuss the integration of community-based model evaluation software (ESMValTool, PCMDI Metrics Package, and ILAMB) as part of our CMIP6 workflow.
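
    As a toy illustration of the controlled-vocabulary validations mentioned above (and not of PrePARE itself), the sketch below checks a dataset's global attributes against a small table of allowed values; the attribute names and vocabularies are simplified examples.

        # Toy controlled-vocabulary check on dataset metadata. Not PrePARE;
        # the attributes and allowed values are simplified examples.
        CONTROLLED_VOCAB = {
            "activity_id": {"CMIP", "ScenarioMIP", "AerChemMIP"},
            "experiment_id": {"historical", "ssp585", "piControl"},
            "frequency": {"mon", "day", "3hr"},
        }

        def validate_attrs(attrs):
            errors = []
            for key, allowed in CONTROLLED_VOCAB.items():
                if key not in attrs:
                    errors.append(f"missing attribute: {key}")
                elif attrs[key] not in allowed:
                    errors.append(f"{key}={attrs[key]!r} not in controlled vocabulary")
            return errors

        # The misspelled experiment_id below is flagged by the check.
        print(validate_attrs({"activity_id": "CMIP", "experiment_id": "historcal",
                              "frequency": "mon"}))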

  17. An Automated Sample Preparation Instrument to Accelerate Positive Blood Cultures Microbial Identification by MALDI-TOF Mass Spectrometry (Vitek®MS).

    PubMed

    Broyer, Patrick; Perrot, Nadine; Rostaing, Hervé; Blaze, Jérome; Pinston, Frederic; Gervasi, Gaspard; Charles, Marie-Hélène; Dachaud, Fabien; Dachaud, Jacques; Moulin, Frederic; Cordier, Sylvain; Dauwalder, Olivier; Meugnier, Hélène; Vandenesch, Francois

    2018-01-01

Sepsis is the leading cause of death among patients in intensive care units (ICUs), requiring an early diagnosis to introduce efficient therapeutic intervention. Rapid identification (ID) of a causative pathogen is key to guiding directed antimicrobial selection and was recently shown to reduce hospitalization length in ICUs. Direct processing of positive blood cultures by MALDI-TOF MS technology is one of several currently available tools used to generate rapid microbial ID. However, all recently published protocols are still manual and time consuming, requiring dedicated technician availability and specific strategies for batch processing. We present here a new prototype instrument for automated preparation of Vitek® MS slides directly from positive blood culture broth, based on an "all-in-one" extraction strip. This benchtop instrument was evaluated on 111 and 22 organisms processed using artificially inoculated blood culture bottles in the BacT/ALERT® 3D (SA/SN blood culture bottles) or the BacT/ALERT Virtuo™ system (FA/FN Plus bottles), respectively. Overall, this new preparation station provided reliable and accurate Vitek MS species-level identification of 87% (Gram-negative bacteria = 85%, Gram-positive bacteria = 88%, and yeast = 100%) when used with BacT/ALERT® 3D and of 84% (Gram-negative bacteria = 86%, Gram-positive bacteria = 86%, and yeast = 75%) with Virtuo® instruments, respectively. The prototype was then evaluated in a clinical microbiology laboratory on 102 clinical blood culture bottles and compared to routine laboratory ID procedures. Overall, the correlation of ID on monomicrobial bottles was 83% (Gram-negative bacteria = 89%, Gram-positive bacteria = 79%, and yeast = 78%), demonstrating roughly equivalent performance between manual and automated extraction methods. This prototype instrument exhibited a high level of performance regardless of bottle type or BacT/ALERT system. Furthermore, blood culture workflow could potentially be improved by converting direct ID of positive blood cultures from a batch-based to a real-time, "on-demand" process.

  18. Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores

    NASA Astrophysics Data System (ADS)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
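
    The task-categorization idea, grouping tasks by resource needs so the scheduler can cap the number running per category, is sketched generically below; the category names, limits and dispatch check are invented and are not Lobster's configuration format.

        # Generic sketch of per-category resource specs and running-task limits,
        # in the spirit of the categorization described above. Not Lobster's format.
        categories = {
            "simulation": {"cores": 8, "memory_mb": 16000, "max_running": 500},
            "merge":      {"cores": 1, "memory_mb": 4000,  "max_running": 50},
        }

        def can_dispatch(category, running_counts):
            """Allow a new task only while its category is under its running limit."""
            return running_counts.get(category, 0) < categories[category]["max_running"]

        print(can_dispatch("simulation", {"simulation": 120}))  # True: under the limit
        print(can_dispatch("merge", {"merge": 50}))             # False: limit reached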

  19. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated against 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.
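
    A toy version of the video-selection step, combining a topic-relevance score with a viewer-sentiment score into one quality score and keeping videos above a threshold; the weights, scores and threshold are invented, not the paper's trained models.

        # Toy scoring of candidate web videos by combining topic relevance and
        # viewer sentiment. Weights, scores and threshold are invented.
        videos = [
            {"id": "v1", "topic_relevance": 0.92, "sentiment": 0.80},
            {"id": "v2", "topic_relevance": 0.55, "sentiment": 0.95},
            {"id": "v3", "topic_relevance": 0.88, "sentiment": 0.40},
        ]

        def quality(v, w_topic=0.7, w_sentiment=0.3):
            return w_topic * v["topic_relevance"] + w_sentiment * v["sentiment"]

        selected = [v["id"] for v in sorted(videos, key=quality, reverse=True)
                    if quality(v) >= 0.7]
        print(selected)  # ['v1', 'v3']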

  20. Barriers to effective, safe communication and workflow between nurses and non-consultant hospital doctors during out-of-hours.

    PubMed

    Brady, Anne-Marie; Byrne, Gobnait; Quirke, Mary Brigid; Lynch, Aine; Ennis, Shauna; Bhangu, Jaspreet; Prendergast, Meabh

    2017-11-01

This study aimed to evaluate the nature and type of communication and workflow arrangements between nurses and doctors out-of-hours (OOH). Effective communication and workflow arrangements between nurses and doctors are essential to minimize risk in hospital settings, particularly in the out-of-hours period. Timely patient flow is a priority for all healthcare organizations, and the quality of communication and workflow arrangements influences patient safety. The study used a qualitative descriptive design; data collection methods included focus groups and individual interviews. The setting was a 500-bed tertiary referral acute hospital in Ireland, and participants were junior and senior non-consultant hospital doctors, staff nurses and nurse managers. Both nurses and doctors acknowledged the importance of good interdisciplinary communication and collaborative working in sustaining effective workflow and enabling a supportive working environment and patient safety. Indeed, issues of safety and missed care OOH were found to be primarily due to difficulties of communication and workflow. Medical workflow OOH is often dependent on cues and communication to and from nursing. However, communication systems, and in particular the bleep system, considered central to the process of communication between doctors and nurses OOH, can contribute to workflow challenges and increased staff stress. It was reported as commonplace for routine work that should be completed during normal hours to fall into the OOH period, when resources were most limited, further compounding risk to patient safety. Enhancement of communication strategies between nurses and doctors has the potential to remove barriers to effective decision-making and patient flow. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
