Sample records for workflow combining ief

  1. Combined use of Kappa Free Light Chain Index and Isoelectrofocusing of Cerebro-Spinal Fluid in Diagnosing Multiple Sclerosis: Performances and Costs.

    PubMed

    Crespi, Ilaria; Sulas, Maria Giovanna; Mora, Riccardo; Naldi, Paola; Vecchio, Domizia; Comi, Cristoforo; Cantello, Roberto; Bellomo, Giorgio

    2017-03-01

    Isoelectrofocusing (IEF) to detect oligoclonal bands (OCBs) in cerebrospinal fluid (CSF) is the gold-standard approach for evaluating intrathecal immunoglobulin synthesis in multiple sclerosis (MS), but the kappa free light chain index (KFLCi) is emerging as an alternative marker, and the combined/sequential use of IEF and KFLCi has never been challenged. CSF and serum albumin, IgG, kFLC and λFLC were measured by nephelometry; albumin, IgG and kFLC quotients as well as the Link and kFLC indexes were calculated; OCBs were evaluated by immunofixation. A total of 150 consecutive patients were investigated: 48 with MS, 32 with other neurological inflammatory diseases (NID), 62 with neurological non-inflammatory diseases (NNID), and 8 without any detectable neurological disease (NND). IEF and KFLCi showed similar accuracy as diagnostic tests for multiple sclerosis. The high sensitivity and specificity together with the lower cost of KFLCi suggested using this test first, followed by IEF as a confirmatory procedure. The sequential use of KFLCi and IEF showed high diagnostic efficiency, with cost reductions of 43 and 21% compared with the concurrent use of both tests and with the use of IEF alone in all patients, respectively. "Sequential testing" with KFLCi followed by IEF in MS represents an optimal procedure with accurate performance and lower costs.
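
    A minimal sketch (not from the paper) of how the quotients and indexes named above are conventionally computed; the albumin-quotient normalization and the Link/kFLC index definitions below are the standard ones, and the concentrations are illustrative only:

      # Conventional CSF/serum quotients and indexes (standard definitions,
      # not code or cut-offs from the study). Concentrations in matching units.
      def quotient(csf, serum):
          """CSF/serum concentration quotient."""
          return csf / serum

      def link_index(csf_igg, serum_igg, csf_alb, serum_alb):
          """IgG (Link) index: IgG quotient normalized by the albumin quotient."""
          return quotient(csf_igg, serum_igg) / quotient(csf_alb, serum_alb)

      def kflc_index(csf_kflc, serum_kflc, csf_alb, serum_alb):
          """Kappa free light chain index (KFLCi), normalized the same way."""
          return quotient(csf_kflc, serum_kflc) / quotient(csf_alb, serum_alb)

      # Illustrative values (mg/L); in the sequential strategy described above, a
      # KFLCi above a laboratory-defined cut-off would then be confirmed by IEF.
      print(kflc_index(0.6, 15.0, 250.0, 42000.0))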

  2. Site-specific protein immobilization in a microfluidic chip channel via an IEF-gelation process.

    PubMed

    Shi, Mianhong; Peng, Youyuan; Yu, Shaoning; Liu, Baohong; Kong, Jilie

    2007-05-01

    A novel strategy for site-specific protein immobilization, combining chip IEF with low-temperature sol-gel technology (termed IEF-GEL here), in the channel of a modified poly(methyl methacrylate) (PMMA) microfluidic chip is proposed in this work. The IEF-GEL process involves first IEF of protein homogeneously dissolved in PBS containing alumina sol and carrier ampholyte with a prearranged pH gradient, and then local gelation for protein encapsulation. The process and feasibility of the proposed IEF-GEL were investigated by EOF measurements, fluorescence microscopy, and Raman spectroscopy, and further demonstrated with glucose oxidase (GOx) reactors integrated with end-column electrochemical detection. Site-controllable immobilization of protein was realized in a 30 mm long microfluidic chip channel, creating an approximately 1.7 mm concentrated FITC-BSA band, which greatly improves the eluted peak shape and is accompanied by remarkably increased sensitivity, approximately 20 times higher than that of GOx reactors without IEF-GEL treatment. The kinetic response of GOx after IEF-GEL treatment was also investigated. The proposed system retains the advantages of IEF and low-temperature sol-gel technologies, i.e. concentrating the protein to be focused and retaining the biological activity of the gel-embedded protein, and thus realizes site-specific immobilization of low-concentration protein at the nanoliter volume level.

  3. Novel fluorescent probe for highly sensitive bioassay using sequential enzyme-linked immunosorbent assay-capillary isoelectric focusing (ELISA-cIEF).

    PubMed

    Henares, Terence G; Uenoyama, Yuta; Nogawa, Yuto; Ikegami, Ken; Citterio, Daniel; Suzuki, Koji; Funano, Shun-ichi; Sueyoshi, Kenji; Endo, Tatsuro; Hisamoto, Hideaki

    2013-06-07

    This paper presents a novel rhodamine diphosphate molecule that allows highly sensitive detection of proteins by employing sequential enzyme-linked immunosorbent assay and capillary isoelectric focusing (ELISA-cIEF). A seven-fold improvement in immunoassay sensitivity and a one to two order of magnitude lower detection limit have been demonstrated by combining the enzyme-based signal amplification of ELISA with the concentration of enzyme reaction products by cIEF.

  4. Isoelectric focusing of small non-covalent metal species from plants.

    PubMed

    Köster, Jessica; Hayen, Heiko; von Wirén, Nicolaus; Weber, Günther

    2011-03-01

    IEF is known as a powerful electrophoretic separation technique for amphoteric molecules, in particular proteins. The objective of the present work is to prove the suitability of IEF also for the separation of small, non-covalent metal species. Investigations are performed with copper-glutathione complexes, with the synthetic ligand ethylenediamine-N,N'-bis(o-hydroxyphenyl)acetic acid (EDDHA) and the respective metal complexes (Fe, Ga, Al, Ni, Zn), and with the phytosiderophore 2'-deoxymugineic acid (DMA) and its ferric complex. It is shown that EDDHA and DMA species are stable during preparative-scale IEF, whereas copper-glutathione dissociates considerably. It is also shown that preparative-scale IEF can be applied successfully to isolate ferric DMA from real plant samples, and that multidimensional separations are possible by combining preparative-scale IEF with subsequent HPLC-MS analysis. Focusing of free ligands and the respective metal complexes with di- and trivalent metals results in different pIs, but CIEF is usually needed for a reliable estimation of pI values. Limitations of the proposed methods (preparative IEF and CIEF) and consequences of the results with respect to metal speciation in plants are discussed. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Loading and release of amine drugs by ion-exchange fibers: role of amine type.

    PubMed

    Gao, Yanan; Liu, Hongzhuo; Yuan, Jing; Yang, Yang; Che, Xin; Hou, Yanlong; Li, Sanming

    2014-04-01

    With increasing production and application of ion-exchange fibers (IEFs), it becomes necessary to understand the interaction between IEFs and amine compounds, an important group of organic drugs and structural components of large organic molecules in biological systems. However, so far few experimental studies have systematically investigated the exchange mechanism of amine compounds on IEFs. Therefore, 15 amine drugs were selected to investigate the effect of amine type on their loading onto and release from the related IEFs. The loading affinity of these drugs for IEFs decreased in the order secondary > tertiary > primary amines. Basicity, aromaticity, molar volume, rotatability, and related properties were discussed to address the mechanisms underlying the extent and rate of drug loading and release by IEFs. It was evident that strongly alkaline drugs strengthened the ionic bond between the amine groups and the IEFs, and thus increased the loading affinity. These results will advance the understanding of the exchange behavior of IEFs in drug delivery systems. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  6. Impact of electrical conductivity on acid hydrolysis of guar gum under induced electric field.

    PubMed

    Li, Dandan; Zhang, Yao; Yang, Na; Jin, Zhengyu; Xu, Xueming

    2018-09-01

    This study aimed to improve induced electric field (IEF)-assisted hydrolysis of polysaccharides by controlling electrical conductivity. As the conductivity of the reaction medium was increased, the energy efficiency of the IEF increased because of decreased impedance and enhanced output voltage and temperature, and the hydrolysis of guar gum (GG) under IEF was accordingly accelerated. Changes in weight-average molecular weight (Mw) suggested that IEF-assisted hydrolysis of GG could be described by first-order kinetics, 1/Mw ∝ kt, with the rate constant k varying directly with the medium conductivity. Although IEF-assisted hydrolysis largely disrupted the morphological structure of GG, it had no impact on the chemical structure. In comparison to native GG, the steady shear viscosity of hydrolyzed GG dramatically declined while the thermal stability slightly decreased. This study extends the knowledge of the effect of electrical conductivity on IEF-assisted acid hydrolysis of GG and might contribute to better utilization of IEF for polysaccharide modification. Copyright © 2018 Elsevier Ltd. All rights reserved.
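
    As a worked illustration of the first-order relation 1/Mw ∝ kt quoted above, the sketch below (synthetic numbers, not the study's data) estimates the rate constant k as the slope of 1/Mw versus hydrolysis time:

      import numpy as np

      # First-order hydrolysis kinetics, 1/Mw = 1/Mw0 + k*t: k is the slope of
      # 1/Mw against time. Values below are synthetic, for illustration only.
      t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])                  # time, h
      mw = np.array([2.0e6, 1.4e6, 1.05e6, 0.85e6, 0.72e6, 0.55e6])      # Mw, g/mol

      k, inv_mw0 = np.polyfit(t, 1.0 / mw, 1)   # linear fit of 1/Mw vs t
      print(f"k = {k:.3e} mol/(g*h), 1/Mw0 = {inv_mw0:.3e} mol/g")
      # A higher-conductivity medium would show a larger fitted k, as reported above.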

  7. Microfluidic Isoelectric Focusing of Amyloid Beta Peptides Followed by Micropillar-Matrix-Assisted Laser Desorption Ionization-Mass Spectrometry.

    PubMed

    Mikkonen, Saara; Jacksén, Johan; Roeraade, Johan; Thormann, Wolfgang; Emmer, Åsa

    2016-10-18

    A novel method for preconcentration and purification of the Alzheimer's disease-related amyloid beta (Aβ) peptides by isoelectric focusing (IEF) in 75 nL microchannels combined with their analysis by micropillar matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is presented. A semiopen chip-based setup, consisting of open microchannels covered by a lid of a liquid fluorocarbon, was used. IEF was performed in a mixture of four small and chemically well-defined amphoteric carriers, glutamic acid, aspartyl-histidine (Asp-His), cycloserine (cSer), and arginine, which provided a stepwise pH gradient tailored for focusing of the C-terminal Aβ peptides with a pI of 5.3 at the boundary between cSer and Asp-His. Information about the focusing dynamics and location of the foci of Aβ peptides and other compounds was obtained using computer simulation and by performing MALDI-MS analysis directly from the open microchannel. With the established configuration, detection was performed by direct sampling of a nanoliter volume containing the focused Aβ peptides from the microchannel, followed by deposition of this volume onto a chip with micropillar MALDI targets. In addition to purification, IEF preconcentration provides at least a 10-fold increase in the MALDI-MS signal. After immunoprecipitation and concentration of the eluate in the microchannel, IEF-micropillar-MALDI-MS is demonstrated to be a suitable platform for detection of Aβ peptides in human cerebrospinal fluid as well as in blood plasma.

  8. Revision of empirical electric field modeling in the inner magnetosphere using Cluster data

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Torbert, R. B.; Spence, H. E.; Khotyaintsev, Yu. V.; Lindqvist, P.-A.

    2013-07-01

    Using Cluster data from the Electron Drift (EDI) and the Electric Field and Wave (EFW) instruments, we revise our empirically-based, inner-magnetospheric electric field (UNH-IMEF) model for several geomagnetic activity levels (Kp<1, 1≤Kp<2, 2≤Kp<3, 3≤Kp<4, 4≤Kp<5, and Kp≥4+). The patterns are derived from one set of data and processing for lower activity levels and another for higher activity levels. As activity increases, the skewed potential contour related to the partial ring current appears on the nightside. With the revised analysis, we find that the skewed potential contours become clearer and the potential contours become denser on the nightside and morningside. Since the fluctuating components are not negligible, standard deviations from the modeled values are included in the model. In this study, we validate the derived model more extensively. We find experimentally that the skewed contours are located close to the last closed equipotential, consistent with previous theories. This gives physical context to our model and serves as one validation effort. As another validation effort, the derived results are compared with other models/measurements. From these comparisons, we conclude that our model has some clear advantages over the others.

  9. Intrinsic Emotional Fluctuation in Daily Negative Affect across Adulthood.

    PubMed

    Liu, Yin; Bangerter, Lauren R; Rovine, Michael J; Zarit, Steven H; Almeida, David M

    2017-12-15

    The study explored daily negative affect (NA) fluctuation, its associations with age, and its developmental characteristics. The sample (n = 790) was drawn from the Midlife Development in the United States study; participants completed two 8-day daily diaries 10 years apart. Multilevel models were estimated within each diary component, where two single daily NA items (depression and nervousness) and daily NA diversity were predicted separately by daily stressor exposure, physical health symptoms, age, gender, education, and neuroticism. The within-person residual variances were output for single NA and NA diversity as intrinsic emotional fluctuation (IEF) within each diary component (i.e., controlled for within- and between-person contextual factors). Multilevel growth models were then fit to explore the developmental characteristics of day-to-day IEF across the 10 years. At the daily level, older age was associated with less IEF in depression and nervousness. Over time, IEF in depression decreased. Additionally, IEF in NA diversity increased for older participants longitudinally. IEF represents a new conceptualization of midlife individuals' daily emotional ups and downs, specifically the intrinsic within-person volatility of emotions. The magnitude of IEF and its longitudinal dynamics may have implications for the health and well-being of middle-aged adults. © The Author(s) 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
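
    A rough sketch of how a within-person residual variance of this kind can be extracted; the random-intercept model, the synthetic diary data and the statsmodels workflow below are simplified stand-ins, not the paper's exact specification:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic 8-day diary data (the real models also included health symptoms,
      # gender, education and neuroticism; those covariates are omitted here).
      rng = np.random.default_rng(0)
      n_person, n_day = 50, 8
      df = pd.DataFrame({
          "pid": np.repeat(np.arange(n_person), n_day),
          "age": np.repeat(rng.uniform(25, 75, n_person), n_day),
          "stressor": rng.integers(0, 2, n_person * n_day),
      })
      df["na"] = 0.5 + 0.3 * df["stressor"] + 0.002 * df["age"] + rng.normal(0, 0.4, len(df))

      # Multilevel model of daily negative affect with a random intercept per person.
      fit = smf.mixedlm("na ~ stressor + age", df, groups=df["pid"]).fit()

      # IEF operationalized here as each person's variance of the within-person
      # residuals after removing the modeled effects.
      df["resid"] = fit.resid
      ief = df.groupby("pid")["resid"].var()
      print(ief.head())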

  10. Laser micromachined isoelectric focusing devices on polymer substrate for electrospray mass spectrometry

    NASA Astrophysics Data System (ADS)

    Lin, Yuehe; Wen, Jenny; Fan, Xiang; Matson, Dean W.; Smith, Richard D.

    1999-08-01

    A microfabricated device for isoelectric focusing (IEF) incorporating an optimized electrospray ionization (ESI) tip was constructed on polycarbonate plates using a laser micromachining technique. The separation channels on an IEF chip were 16 cm long, 50 micrometers wide and 30 micrometers deep. Electrical potentials used for IEF and electrospray were applied through platinum electrodes placed in the buffer reservoirs, which were isolated from the separation channel by molecular porous membranes. On-line ESI produced directly from a sharp 'tip' on the microchip was evaluated. The results indicate that this design can produce a stable electrospray that is further improved and made more flexible with the assistance of sheath gas and sheath liquid. Error analysis of the spectral data shows that the standard deviation in signal intensity for an analyte peak was less than approximately 5% over 3 hours. The production of stable electrosprays directly from microchip IEF devices represents a step towards easily fabricated microanalytical devices. IEF separations of protein mixtures were demonstrated for uncoated polycarbonate microchips. On-line IEF/ESI-MS was demonstrated using the microfabricated chip with an ion-trap ESI mass spectrometer for characterization of protein mixtures.

  11. Rethinking environmental stress from the perspective of an integrated environmental footprint: Application in the Beijing industry sector.

    PubMed

    Hu, Jingru; Huang, Kai; Ridoutt, Bradley G; Yu, Yajuan; Wei, Jing

    2018-05-13

    Individual footprint indicators are limited in that they usually only address one specific environmental aspect. For this reason, assessments involving multiple footprint indicators are preferred. However, the interpretation of a profile of footprint indicators can be difficult as the relative importance of the different footprint results is not readily discerned by decision-makers. In this study, a time series (1997-2012) of carbon, water and land footprints was calculated for industry sectors in the Beijing region using input-output analysis. An integrated environmental footprint (IEF) was subsequently developed using normalization and entropy weighting. The results show that steep increases in environmental footprint have accompanied Beijing's rapid economic development. In 2012, the Primary Industry had the largest IEF (8.32); however, the Secondary Industry had the greatest increase over the study period, from 0.19 to 6.37. For the Primary Industry, the greatest contribution to the IEF came from the land footprint. For the Secondary and Tertiary Industries, the water footprint was most important. Using the IEF, industry sectors with low resource utilization efficiency and high greenhouse gas emissions intensity can be identified. As such, the IEF can help to indicate which industry sectors should be given priority for modernization, as well as the particular footprints that require priority attention in each sector. The IEF can also be helpful in identifying industry sectors that could be encouraged to expand within the Beijing region because they are especially efficient in terms of value added relative to their IEF. Other industries, over time, may be better located in other regions that do not face the same environmental pressures as Beijing. Copyright © 2018 Elsevier B.V. All rights reserved.
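
    A minimal sketch of the normalization-plus-entropy-weighting aggregation described above; the min-max normalization is assumed (the abstract does not specify the scheme) and the footprint values are purely illustrative:

      import numpy as np

      # Hypothetical footprint matrix: rows = sectors, columns = (carbon, water, land).
      X = np.array([[120.0,  80.0, 300.0],
                    [400.0, 150.0,  60.0],
                    [ 90.0, 210.0,  40.0]])

      # 1) Min-max normalization per indicator (assumed scheme).
      Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

      # 2) Entropy weights: indicators with more dispersion across sectors weigh more.
      P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)
      e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
      w = (1.0 - e) / (1.0 - e).sum()

      # 3) Integrated environmental footprint of each sector as a weighted sum.
      ief = Xn @ w
      print("weights:", w, "IEF per sector:", ief)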

  12. The ionospheric eclipse factor method (IEFM) and its application to determining the ionospheric delay for GPS

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Tscherning, C. C.; Knudsen, P.; Xu, G.; Ou, J.

    2008-01-01

    A new method for modeling the ionospheric delay using global positioning system (GPS) data is proposed, called the ionospheric eclipse factor method (IEFM). It is based on establishing a concept referred to as the ionospheric eclipse factor (IEF) λ of the ionospheric pierce point (IPP) and the IEF's influence factor (IFF) λ̄. The IEF can be used to make a relatively precise distinction between ionospheric daytime and nighttime, whereas the IFF is advantageous for describing the IEF's variations with day, month, season and year, associated with seasonal variations of the total electron content (TEC) of the ionosphere. By combining λ and λ̄ with the local time t of the IPP, the IEFM is able to precisely distinguish between ionospheric daytime and nighttime, as well as efficiently combine them during different seasons or months of the year at the IPP. The IEFM-based ionospheric delay estimates are validated by combining an absolute positioning mode with several ionospheric delay correction models or algorithms, using GPS data at an International GNSS Service (IGS) station (WTZR). Our results indicate that the IEFM may further improve ionospheric delay modeling using GPS data.

  13. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    PubMed

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

    In this work, fast isoelectric focusing (IEF) was successfully implemented in an open paper fluidic channel for simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins by smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This paper-based analytical device (PAD) IEF is potentially useful for either point-of-care testing (POCT) or biomarker analysis as a cost-effective sample pretreatment method.

  14. Introductory guide to integrated ecological framework.

    DOT National Transportation Integrated Search

    2014-10-01

    This guide introduces the Integrated Ecological Framework (IEF) to Texas Department of Transportation (TxDOT) engineers and planners. The IEF is a step-by-step approach to integrating ecological and transportation planning with the goal of avoiding imp...

  15. Steady-state protein focusing in carrier ampholyte based isoelectric focusing: Part I-Analytical solution.

    PubMed

    Shim, Jaesool; Yoo, Kisoo; Dutta, Prashanta

    2017-03-01

    The determination of an analytical solution for the steady-state protein concentration distribution in IEF is very challenging due to the nonlinear coupling between the mass and charge conservation equations. In this study, approximate analytical solutions are obtained for the steady-state protein distribution in carrier ampholyte based IEF. Similar to the work of Svensson, the final concentration profile for proteins is assumed to be Gaussian, but appropriate expressions are presented for obtaining the effective electric field and pH gradient in the focused protein band region. Analytical results are found from iterative solutions of a system of coupled algebraic equations within only a few iterations for IEF separation of three plasma proteins: albumin, cardiac troponin I, and hemoglobin. The analytical results are compared with numerically predicted results for IEF, showing excellent agreement. Analytically obtained electric field and ionic conductivity distributions show significant deviation from their nominal values, which is essential in capturing the protein focusing behavior at the isoelectric points. These analytical solutions can be used to determine the steady-state protein concentration distribution for the design of IEF experiments considering any number of proteins and ampholytes. Moreover, the model presented herein can be used to find the conductivity, electric field, and pH field. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
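
    For context, the classical Svensson result that this work builds on treats a focused band as a Gaussian with variance sigma^2 = D/(pE), where p = -(dmu/dpH)(dpH/dx); the sketch below evaluates that expression with illustrative parameter values, not the paper's effective-field and effective-pH-gradient expressions:

      import numpy as np

      # Classical Svensson band-width estimate; all parameter values are illustrative.
      D      = 6.0e-11    # protein diffusivity, m^2/s
      E      = 5.0e3      # nominal electric field, V/m
      dmu    = -2.0e-9    # d(mobility)/d(pH) near the pI, m^2/(V*s*pH)
      dpH_dx = 1.0e2      # pH gradient, pH/m

      p     = -dmu * dpH_dx          # focusing strength
      sigma = np.sqrt(D / (p * E))   # standard deviation of the Gaussian band, m

      x = np.linspace(-5 * sigma, 5 * sigma, 201)
      c = np.exp(-x**2 / (2.0 * sigma**2))   # normalized concentration profile
      print(f"band standard deviation: {sigma * 1e6:.1f} um")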

  16. The isolation and characterisation of jacalin [Artocarpus heterophyllus (jackfruit) lectin] based on its charge properties.

    PubMed

    Kabir, S

    1995-02-01

    Jackfruit extracts contain a protein termed jacalin which possesses diverse biological properties. A detailed analysis of its charge properties has been lacking. The present investigation was initiated to study the isoelectric properties of jacalin in detail and to isolate a single isoform of jacalin. Jacalin was isolated from jackfruit extracts by affinity chromatography on immunoglobulin A immobilised to Sepharose 4B. Various techniques such as ion-exchange chromatography, isoelectric focusing (IEF) on polyacrylamide gels and preparative liquid IEF with the Rotofor cell were used. When analysed by IEF on thin-layer polyacrylamide gels, jacalin was resolved into 35 bands over a pH range of 5.0-8.5. Upon SDS-PAGE in the second dimension, all these charge species gave rise to only two bands, at 12 and 15.4 kDa. The lectin was mostly eluted with 50 and 100 mM sodium chloride when jackfruit extracts were fractionated on an anion-exchange column of DEAE-cellulose. In a single 6-hour run of preparative IEF with the Rotofor cell in the pH range 3-9.5, it was possible to isolate pure jacalin fractions containing a smaller number of charge isomers. A single jacalin isoform was isolated by subjecting a Rotofor fraction containing fewer charged species to preparative IEF on a thin-layer polyacrylamide gel and eluting the band of interest from the gel. The isolated jacalin isoform was biologically active, as it agglutinated erythrocytes. The study reveals the complexity of jacalin, which exists as multiple charge isomers over a broad pH range. By performing preparative IEF in solution as well as in thin-layer polyacrylamide gels, it was possible to isolate a single jacalin isoform with retention of biological activity.

  17. Analysis of human bone alkaline phosphatase isoforms: comparison of isoelectric focusing and ion-exchange high-performance liquid chromatography.

    PubMed

    Sharp, Christopher A; Linder, Cecilia; Magnusson, Per

    2007-04-01

    Several isoforms of alkaline phosphatase (ALP) can be identified in human tissues and serum after separation by anion-exchange HPLC and isoelectric focusing (IEF). We purified four soluble bone ALP (BALP) isoforms (B/I, B1x, B1 and B2) from human SaOS-2 cells, determined their specific pI values by broad-range IEF (pH 3.5-9.5), compared these with commercial preparations of bone, intestinal and liver ALPs, and established the effects of neuraminidase and wheat germ agglutinin (WGA) on enzyme activity. Whilst the isoforms B1x (pI=4.48), B1 (pI=4.32) and B2 (pI=4.12) resolved as well-defined bands, B/I resolved as a complex (pI=4.85-6.84). Neuraminidase altered the migration of all BALP isoforms to pI=6.84 and abolished their binding to the anion-exchange matrix, but increased their enzymatic activities by 11-20%. WGA precipitated the BALP isoforms in IEF gels and on the HPLC column and attenuated their enzymatic activities by 54-73%. IEF resolved the commercial BALP into two major bands (pI=4.41 and 4.55). Migration of BALP isoforms is similar in IEF and anion-exchange HPLC and dependent on sialic acid content. HPLC is preferable in smaller scale research applications where samples containing mixtures of BALP isoforms are analysed. Circulating liver ALP (pI=3.85) can be resolved from BALP by either method. IEF represents a simpler approach for routine purposes even though some overlapping of the isoforms may occur.

  18. Rapid Protein Separations in Microfluidic Devices

    NASA Technical Reports Server (NTRS)

    Fan, Z. H.; Das, Champak; Xia, Zheng; Stoyanov, Alexander V.; Fredrickson, Carl K.

    2004-01-01

    This paper describes the fabrication of glass and plastic microfluidic devices for protein separations. Although the long-term goal is to develop a microfluidic device for two-dimensional gel electrophoresis, this paper focuses on the first dimension, isoelectric focusing (IEF). A laser-induced fluorescence (LIF) imaging system has been built for imaging an entire channel in an IEF device. Whole-channel imaging eliminates the need to migrate focused protein bands, which is required if a single-point detector is used. Using the devices and the imaging system, we are able to perform IEF separations of proteins within minutes rather than the hours required by traditional bench-top instruments.

  19. Detection of recombinant EPO in blood and urine samples with EPO WGA MAIIA, IEF and SAR-PAGE after microdose injections.

    PubMed

    Dehnes, Yvette; Shalina, Alexandra; Myrvold, Linda

    2013-01-01

    The misuse of microdoses of performance-enhancing drugs like erythropoietin (EPO) constitutes a major challenge in doping analysis. When injected intravenously, the half-life of recombinant human EPO (rhEPO) such as epoetin alfa, beta, and zeta is only a few hours, and hence the window for direct detection of rhEPO in urine is small. In order to investigate the detection window for rhEPO directly in blood and urine with a combined affinity chromatography and lateral flow immunoassay (EPO WGA MAIIA), we recruited nine healthy people who each received six intravenously injected microdoses (7.5 IU/kg) of NeoRecormon (epoetin beta) over a period of three weeks. Blood and urine samples were collected in the days following the injections and analyzed with EPO WGA MAIIA as well as the currently validated methods for rhEPO, isoelectric focusing (IEF) and sarcosyl polyacrylamide gel electrophoresis (SAR-PAGE). For samples collected 18 h after a microdose, the sensitivity of the EPO WGA MAIIA assay was 100% in plasma and 87.5% in urine samples at the respective 98% specificity threshold levels. In comparison, the sensitivity in plasma and urine was 75% and 100%, respectively, with IEF, and 87.5% in plasma and 100% in urine when analyzed with SAR-PAGE. We conclude that EPO WGA MAIIA is a sensitive assay for the detection of rhEPO, with the potential of being a fast, supplemental screening assay for use in doping analysis.

  20. Culture Writes the Script: On the Centrality of Context in Indigenous Evaluation

    ERIC Educational Resources Information Center

    LaFrance, Joan; Nichols, Richard; Kirkhart, Karen E.

    2012-01-01

    Context grounds all aspects of indigenous evaluation. From an indigenous evaluation framework (IEF), programs are understood within their relationship to place, setting, and community, and evaluations are planned, undertaken, and validated in relation to cultural context. This chapter describes and explains fundamental elements of IEF epistemology…

  1. Continuous-flow electro-assisted acid hydrolysis of granular potato starch via inductive methodology.

    PubMed

    Li, Dandan; Yang, Na; Jin, Yamei; Guo, Lunan; Zhou, Yuyi; Xie, Zhengjun; Jin, Zhengyu; Xu, Xueming

    2017-08-15

    The induced electric field assisted hydrochloric acid (IEF-HCl) hydrolysis of potato starch was investigated in a fluidic system. The impact of various reaction parameters on the hydrolysis rate, including reactor number (1-4), salt type (KCl, MgCl2, FeCl3), salt concentration (3-12%), temperature (40-55°C), and hydrolysis time (0-60 h), was comprehensively assessed. Under optimal conditions, the maximum reducing sugar content in the hydrolysates was 10.59 g/L. X-ray diffraction suggested that the crystallinity of IEF-HCl-modified starches increased with the intensification of hydrolysis but remained lower than that of native starch. Scanning electron microscopy indicated that the surface and interior regions of the starch granules were disrupted by the hydrolysis. The solubility of IEF-HCl-modified starches increased compared to native starch while their swelling power decreased, contributing to a decline in paste viscosity. These results suggest that IEF is a notable potential alternative electrotechnology to conventional hydrolysis under mild conditions, without any electrode touching the sample. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Possible external sources of terrestrial cloud cover variability: the solar wind

    NASA Astrophysics Data System (ADS)

    Voiculescu, Mirela; Usoskin, Ilya; Condurache-Bota, Simona

    2014-05-01

    Cloud cover plays an important role in the terrestrial radiation budget. The possible influence of solar activity on cloud cover is still an open question with contradictory answers. An extraterrestrial factor potentially affecting cloud cover is related to fields associated with the solar wind. We focus here on a derived quantity, the interplanetary electric field (IEF), defined as the product of the solar wind speed and the meridional component, Bz, of the interplanetary magnetic field (IMF) in the Geocentric Solar Magnetospheric (GSM) system. We show that cloud cover at mid-high latitudes systematically correlates with positive IEF, which has a clear energetic input into the atmosphere, but not with negative IEF, in general agreement with predictions of the global electric circuit (GEC)-related mechanism. Since the IEF responds differently to solar activity than, for instance, cosmic ray flux or solar irradiance, we also show that such a study allows one solar-driven mechanism of cloud evolution, via the GEC, to be distinguished from others. We also present results showing that the link between cloud cover and the IMF varies depending on the composition and altitude of clouds.

  3. Decomplexation efficiency and mechanism of Cu(II)-EDTA by H2O2 coupled internal micro-electrolysis process.

    PubMed

    Zhou, Dongfang; Hu, Yongyou; Guo, Qian; Yuan, Weiguang; Deng, Jiefan; Dang, Yapan

    2016-12-29

    Internal micro-electrolysis (IE) coupled with Fenton oxidation (IEF) is a very effective technology for treating copper (Cu)-ethylenediaminetetraacetic acid (EDTA) wastewater. However, the mechanisms of Cu2+ removal and EDTA degradation in the IEF process have been little studied and remain unconvincing. In this paper, the decomplexation and removal efficiency of Cu-EDTA and the corresponding mechanisms during the IEF process were investigated by batch tests. An empirical equation and the oxidation-reduction potential (ORP) index were proposed to flexibly control the IE and Fenton processes, respectively. The results showed that the Cu2+, total organic carbon (TOC), and EDTA removal efficiencies were 99.6, 80.3, and 83.4%, respectively, under the proper operating conditions for IE and the Fenton process: iron dosage of 30 g/L, Fe/C of 3/1, initial pH of 3.0, Fe2+/H2O2 molar ratio of 1/4, and reaction time of 20 min. The contributions of IE and Fenton to Cu2+ removal were 91.2 and 8.4%, respectively; their contributions to TOC removal were 23.3 and 57%, and to EDTA removal 25.1 and 58.3%, respectively. It was found that Fe2+-based replacement-precipitation and hydroxyl radicals (•OH) were the most important effects during the IEF process. •OH played an important role in the degradation of EDTA; its yield and production rate were 3.13 mg/L and 0.157 mg/(L·min), respectively. Based on the intermediates detected by GC-MS, including acetic acid, propionic acid, pentanoic acid, amino acetic acid, 3-(diethylamino)-1,2-propanediol, and nitrilotriacetic acid (NTA), a possible degradation pathway of Cu-EDTA in the IEF process was proposed. Graphical abstract: the mechanism diagram of the IEF process.

  4. A commercial isoelectric focusing apparatus for use in microgravity

    NASA Astrophysics Data System (ADS)

    Johnson, Jerald F.; Dandy, Jonathan S.; Johnson, Terry C.

    2000-01-01

    A series of studies have tested the possibility that the microgravity environment may be superior to laboratories on Earth for several biomedical applications. One such application is isoelectric focusing (IEF). The purpose of our research is to design, build, test, and employ an analytical IEF instrument for use in the laboratory on the International Space Station (ISS) and to demonstrate the advantages of space-based IEF. This paper describes IEF in general, discusses the design considerations that arise for IEF in low gravity, and presents design solutions for some of the systems under development. Isoelectric focusing is a powerful technique that has applications for both analytical analysis and the preparative purification of macromolecules. IEF resolves proteins by net charge separation, in either liquid or semi-solid substrates, where the molecules migrate to their isoelectric point (pI). In Earth-based IEF, separation media are usually semi-solids such as polyacrylamide and agarose gels. The matrix structure of these media is used to offset the gravity-induced diffusion and convection that occur in free solutions. With these effects greatly reduced, a free solution could be used as a superior medium. Because diffusion in liquids is reduced in microgravity (Snyder, 1986), a given electrical field should result in more tightly focused bands. This would allow for the separation of proteins that have very closely spaced pIs. If superior results are achieved, there are numerous pharmaceutical and genetic engineering companies that would take advantage of this unique development. The design of the Commercial IsoElectric Focusing Apparatus (CIEFA) presents several significant engineering challenges specific to its operation in the microgravity environment. Three difficulties of particular importance are gases generated through electrolysis, temperature control, and verification of protein separation. Gases generated through electrolysis must be isolated from the electrodes to prevent current limiting. Special measures for temperature control must be taken due to the absence of gravity-induced convective heat flow. In order for the experiment results to be examined, some mechanism must be in place to either document or preserve the protein bands. Preliminary testing aboard the space shuttle requires that the CIEFA be compatible with the shuttle's middeck locker. This requirement poses limits on the physical parameters of size, mass, power consumption, and heat generation. In addition, the design must be NASA-certifiable for shuttle flight. This diverse list of design obstacles requires integration of biological, electrical, and mechanical solutions.

  5. Hormone Purification by Isoelectric Focusing

    NASA Technical Reports Server (NTRS)

    Bier, M.

    1985-01-01

    Various ground-based research approaches are being applied to a more definitive evaluation of the natures and degrees of electroosmosis effects on the separation capabilities of the Isoelectric Focusing (IEF) process. A primary instrumental system for this work involves rotationally stabilized, horizontal electrophoretic columns specially adapted for the IEF process. Representative adaptations include segmentation, baffles/screens, and surface coatings. Comparative performance and development testing are pursued against the type of column or cell established as an engineering model. Previously developed computer simulation capabilities are used to predict low-gravity behavior patterns and performance for IEF apparatus geometries of direct project interest. Three existing mathematical models plus potential new routines for particular aspects of simulating instrument fluid patterns with varied wall electroosmosis influences are being exercised.

  6. Subclassification of fatty liver by its pathogenesis: cIEFing is believing.

    PubMed

    Byrne, Frances L; Hoehn, Kyle L

    2016-05-01

    Fatty liver, also termed hepatic steatosis or fatty liver disease, is a condition characterized by excess fat accumulation in the liver. Common causes of fatty liver include obesity, ageing, medications, genetic disorders, viral hepatitis, excess alcohol or toxins. This diversity in pathogenesis is matched by an equally diverse spectrum of consequences, whereby some individuals remain asymptomatic yet others progress through a series of inflammatory, fibrotic and metabolic disorders that can lead to liver failure, cancer or diabetes. Current treatment approaches for fatty liver do not differ by disease aetiology and primarily involve weight loss strategies or management of co-morbidities. In a recent paper published in this journal, Urasaki et al used capillary isoelectric focusing (cIEF) to create profiles of protein post-translational modifications that distinguish four different models of fatty liver in mice. Importantly, this new cIEF approach has the potential to provide rapid individualized diagnosis of fatty liver pathogenesis that may enable more accurate and personalized treatment strategies. Further testing and optimization of cIEF as a diagnostic screening tool in humans is warranted. Copyright © 2016 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  7. A versatile semi-permanent sequential bilayer/diblock polymer coating for capillary isoelectric focusing.

    PubMed

    Bahnasy, Mahmoud F; Lucy, Charles A

    2012-12-07

    A sequential surfactant bilayer/diblock copolymer coating was previously developed for the separation of proteins. The coating is formed by flushing the capillary with the cationic surfactant dioctadecyldimethylammonium bromide (DODAB) followed by the neutral polymer poly-oxyethylene (POE) stearate. Herein we describe method development and optimization for capillary isoelectric focusing (cIEF) separations based on this sequential coating. The electroosmotic flow can be tuned by varying the POE chain length, which allows optimization of resolution and analysis time. DODAB/POE 40 stearate can be used to perform single-step cIEF, while both DODAB/POE 40 and DODAB/POE 100 stearate allow two-step cIEF methodologies. A set of peptide markers is used to assess the coating performance. The sequential coating has been applied successfully to cIEF separations using different capillary lengths and inner diameters. A linear pH gradient is established only in the two-step cIEF methodology using 2.5% (v/v) pH 3-10 carrier ampholyte. Hemoglobin A(0) and S variants are successfully resolved on DODAB/POE 40 stearate sequentially coated capillaries. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Space-time evolution of cataclasis in carbonate fault zones

    NASA Astrophysics Data System (ADS)

    Ferraro, Francesco; Grieco, Donato Stefano; Agosta, Fabrizio; Prosser, Giacomo

    2018-05-01

    The present contribution focuses on the micro-mechanisms associated with cataclasis of both calcite- and dolomite-rich fault rocks. This work combines field and laboratory data from carbonate fault cores currently exposed in central and southern Italy. By first deciphering the main fault rock textures, their spatial distribution, crosscutting relationships and multi-scale dimensional properties, the relative timing of Intragranular Extensional Fracturing (IEF), chipping, and localized shear is inferred. IEF was predominant within already fractured carbonates, forming coarse and angular rock fragments, and likely lasted for a longer period within the dolomitic fault rocks. Chipping occurred in both lithologies and was activated by grain rolling, forming minute, sub-rounded survivor grains embedded in a powder-like carbonate matrix. The largest fault zones, which crosscut either limestones or dolostones, were subjected to localized shear and, eventually, to a flash temperature increase which caused thermal decomposition of calcite within narrow (cm-thick) slip zones. Results are organized in a synoptic panel including the main dimensional properties of the survivor grains. Finally, a conceptual model of the time-dependent evolution of cataclastic deformation in carbonate rocks is proposed.

  9. Identification of fish species after cooking by SDS-PAGE and urea IEF: a collaborative study.

    PubMed

    Etienne, M; Jérôme, M; Fleurence, J; Rehbein, H; Kündiger, R; Mendes, R; Costa, H; Pérez-Martín, R; Piñeiro-González, C; Craig, A; Mackie, I; Malmheden Yman, I; Ferm, M; Martínez, I; Jessen, F; Smelt, A; Luten, J

    2000-07-01

    A collaborative study to validate the use of SDS-PAGE and urea IEF for the identification of fish species after cooking was performed by nine laboratories. Following optimized standard operating procedures, 10 commercially important species (Atlantic salmon, sea trout, rainbow trout, turbot, Alaska pollock, pollack, pink salmon, Arctic char, chum salmon, and New Zealand hake) had to be identified by comparison with 22 reference samples. Some differences in the recoveries of proteins from cooked fish flesh were noted between the urea and the SDS extraction procedures used. Generally, the urea extraction procedure appears to be less efficient than SDS extraction for protein solubilization. Except for some species belonging to the Salmonidae family (Salmo, Oncorhynchus), both of the analytical techniques tested (urea IEF, SDS-PAGE) enabled the species of the samples to be identified. With urea IEF, two laboratories could not differentiate Salmo salar from Salmo trutta. The same difficulties were noted for differentiation between Oncorhynchus gorbuscha and Oncorhynchus keta samples. With SDS-PAGE, three laboratories had some difficulties in identifying the S. trutta samples. However, in contrast with the previous technique, SDS-PAGE allows the characterization of most of the Oncorhynchus species tested. Only Oncorhynchus mykiss was not clearly recognized by one laboratory. Therefore, SDS-PAGE (ExcelGel homogeneous 15%) appears to be better for the identification, after cooking, of fish such as the tuna and salmon species, which are characterized by neutral and basic protein bands, and urea IEF (CleanGel) is better for the gadoid species, which are characterized by acidic protein bands (parvalbumins). Nevertheless, in contentious cases it is preferable to use both analytical methods.

  10. β-Globin gene sequencing of hemoglobin Austin revises the historically reported electrophoretic migration pattern.

    PubMed

    Racsa, Lori D; Luu, Hung S; Park, Jason Y; Mitui, Midori; Timmons, Charles F

    2014-06-01

    Hemoglobin (Hb) Austin was defined in 1977, using amino acid sequencing of samples from 3 unrelated Mexican-Americans, as a substitution of serine for arginine at position 40 of the β-globin chain (Arg40Ser). Its electrophoretic migration on both cellulose acetate (pH 8.4) and citrate agar (pH 6.2) was reported as between Hb F and Hb A, and this description persists in the reference literature. The objectives of this study were to review the clinical features and redefine the diagnostic characteristics of Hb Austin. Eight samples from 6 unrelated individuals and 2 siblings, all with Hispanic surnames, were submitted for abnormal Hb identification between June 2010 and September 2011. High-performance liquid chromatography, isoelectric focusing (IEF), citrate agar electrophoresis, and bidirectional DNA sequencing of the entire β-globin gene were performed. DNA sequencing confirmed all 8 individuals to be heterozygous for Hb Austin (Arg40Ser). Retention time on high-performance liquid chromatography and migration on citrate agar electrophoresis were consistent with that identification. Migration on IEF, however, was not between Hb F and Hb A, as predicted from the report of cellulose acetate electrophoresis. By IEF, Hb Austin migrated anodal to ("faster than") Hb A. Hemoglobin Austin (Arg40Ser) appears on IEF as a "fast," anodally migrating Hb variant, just as would be expected from its amino acid substitution. The cited historic report is, at best, not applicable to IEF and is probably erroneous. Our observation of 8 cases in 16 months suggests that this variant may be relatively common in some Hispanic populations, making its recognition important. Furthermore, gene sequencing is proving itself a powerful and reliable tool for definitive identification of Hb variants.

  11. Isoelectric focusing-affinity immunoblot analysis of mouse monoclonal antibodies to the four human IgG subclasses

    NASA Technical Reports Server (NTRS)

    Hamilton, Robert G.; Roebber, Marianne; Rodkey, L. Scott; Reimer, Charles B.

    1987-01-01

    Isoelectric focusing (IEF)/affinity immunoblotting and enzyme-linked immunosorbent assay (ELISA) were used for parallel analysis of murine monoclonal antihuman IgG-subclass antisera (MoAbs). Coomassie Blue-stained protein bands in the pH region 5.5-8.0 were shown to be murine IgG by direct blotting onto nitrocellulose followed by detection with conjugated antimouse IgG. Use of IgG myeloma antigen-coated nitrocellulose in the IEF-affinity immunoblot allowed detection of the charge microheterogeneity of MoAbs. The MoAb group contained one to five major dense bands flanked by up to four minor fainter bands, all with pIs ranging from 6.1 to 7.8. Semiquantitative estimates of binding specificity in the IEF-affinity blot compared well with cross-reactivity data obtained from a quantitative ELISA.

  12. Integration of isoelectric focusing with parallel sodium dodecyl sulfate gel electrophoresis for multidimensional protein separations in a plastic microfluidic [correction of microfludic] network.

    PubMed

    Li, Yan; Buch, Jesse S; Rosenberger, Frederick; DeVoe, Don L; Lee, Cheng S

    2004-02-01

    An integrated protein concentration/separation system, combining non-native isoelectric focusing (IEF) with sodium dodecyl sulfate (SDS) gel electrophoresis on a polymer microfluidic chip, is reported. The system provides significant analyte concentration and extremely high resolving power for separated protein mixtures. The ability to introduce and isolate multiple separation media in a plastic microfluidic network is one of two key requirements for achieving multidimensional protein separations. The second requirement lies in the quantitative transfer of focused proteins from the first to the second separation dimension without significant loss in the resolution acquired from the first dimension. Rather than sequentially sampling protein analytes eluted from IEF, focused proteins are electrokinetically transferred into an array of orthogonal microchannels and further resolved by SDS gel electrophoresis in a parallel and high-throughput format. Resolved protein analytes are monitored using noncovalent, environment-sensitive, fluorescent probes such as Sypro Red. In comparison with covalent labeling of proteins, the use of Sypro staining during electrophoretic separations not only presents a generic detection approach for the analysis of complex protein mixtures such as cell lysates but also avoids additional introduction of protein microheterogeneity as a result of the labeling reaction. A comprehensive 2-D protein separation is completed in less than 10 min with an overall peak capacity of approximately 1700 using a chip with planar dimensions of as small as 2 cm x 3 cm. Significant enhancement in the peak capacity can be realized by simply raising the density of microchannels in the array, thereby increasing the number of IEF fractions further analyzed in the size-based separation dimension.

  13. Drugs and Mental Health Problems among the Roma: Protective Factors Promoted by the Iglesia Evangélica Filadelfia

    PubMed Central

    López, Jelen Amador; García, Ramón Flecha; Martí, Teresa Sordé

    2018-01-01

    Background: High incidences of drug consumption and mental health problems are found among the Roma population in Spain, a reality that remains understudied. Past studies have indicated the positive role played by the Iglesia Evangélica Filadelfia (IEF) in promoting rehabilitation and prevention of these practices. Objective: In this article, authors analyze in which ways the IEF favors processes of drug rehabilitation and mental health recovery as well as the prevention of these problems among its Roma members. Methods: A communicative qualitative approach was developed. It was communicative because new knowledge was created by dialogically contrasting the existing state of the art with study participants. It was qualitative because everyday life stories were collected, gathering the experiences, perceptions and interpretations of Roma people who are actively involved in three different IEF churches based in Barcelona. Results: This article identifies these protective factors: anti-drug discourse, a supportive environment, new social relations, role model status, the promotion of interactions, the revaluation of oneself, spiritual activities and the improvement of the feeling of belonging and the creation of meaning. Conclusion: The present research contributes new evidence to the current understanding of the role played by the IEF in improving Roma health status and how the identified protective factors can contribute to rehabilitation and recovery from such problems in other contexts. PMID:29443877

  14. Drugs and Mental Health Problems among the Roma: Protective Factors Promoted by the Iglesia Evangélica Filadelfia.

    PubMed

    López, Jelen Amador; García, Ramón Flecha; Martí, Teresa Sordé

    2018-02-14

    Background: High incidences of drug consumption and mental health problems are found among the Roma population in Spain, a reality that remains understudied. Past studies have indicated the positive role played by the Iglesia Evangélica Filadelfia (IEF) in promoting rehabilitation and prevention of these practices. Objective: In this article, authors analyze in which ways the IEF favors processes of drug rehabilitation and mental health recovery as well as the prevention of these problems among its Roma members. Methods: A communicative qualitative approach was developed. It was communicative because new knowledge was created by dialogically contrasting the existing state of the art with study participants. It was qualitative because everyday life stories were collected, gathering the experiences, perceptions and interpretations of Roma people who are actively involved in three different IEF churches based in Barcelona. Results: This article identifies these protective factors: anti-drug discourse, a supportive environment, new social relations, role model status, the promotion of interactions, the revaluation of oneself, spiritual activities and the improvement of the feeling of belonging and the creation of meaning. Conclusion: The present research contributes new evidence to the current understanding of the role played by the IEF in improving Roma health status and how the identified protective factors can contribute to rehabilitation and recovery from such problems in other contexts.

  15. Potencies of red seabream AHR1- and AHR2-mediated transactivation by dioxins: implication of both AHRs in dioxin toxicity.

    PubMed

    Bak, Su-Min; Iida, Midori; Hirano, Masashi; Iwata, Hisato; Kim, Eun-Young

    2013-03-19

    To evaluate species- and isoform-specific responses to dioxins and related compounds (DRCs) via the aryl hydrocarbon receptor (AHR) in the red seabream (Pagrus major), we constructed a reporter gene assay system. Expression plasmids for red seabream AHR1 (rsAHR1) and AHR2 (rsAHR2), together with a reporter plasmid containing the red seabream CYP1A 5'-flanking region, were transfected into COS-7 cells. The cells were treated with graded concentrations of seven DRC congeners: 2,3,7,8-TCDD, 1,2,3,7,8-PeCDD, 1,2,3,4,7,8-HxCDD, 2,3,7,8-TCDF, 2,3,4,7,8-PeCDF, 1,2,3,4,7,8-HxCDF, and PCB126. Both rsAHR1 and rsAHR2 exhibited dose-dependent responses for all the tested congeners. The rsAHR isoform-specific TCDD induction equivalency factors (rsAHR1- and rsAHR2-IEFs) were calculated on the basis of the relative potency to 2,3,7,8-TCDD derived from the dose-response of each congener. The rsAHR1-IEFs of PeCDD, HxCDD, TCDF, PeCDF, and HxCDF were estimated as 0.17, 0.29, 2.5, 1.5, and 0.27, respectively. For PCB126, no rsAHR1-IEF was given because its maximum response was less than 10% of that of 2,3,7,8-TCDD. The rsAHR2-IEFs of PeCDD, HxCDD, TCDF, PeCDF, HxCDF, and PCB126 were estimated as 0.38, 0.13, 1.5, 0.93, 0.20, and 0.0085, respectively. The rsAHR1/2-IEF profiles were different from the WHO toxic equivalency factors for fish. In silico docking simulations supported that both rsAHRs have the potential to bind these congeners. These results suggest that dioxin toxicities may be mediated by both rsAHRs in the red seabream.
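
    A hedged sketch of how induction equivalency factors of this kind are commonly derived from reporter-gene dose-response data, assuming a Hill fit and a relative potency defined as EC50(TCDD)/EC50(congener); the synthetic data and fitting choices below are illustrative and are not taken from the study:

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(dose, bottom, top, ec50, slope):
          """Four-parameter Hill model for reporter induction."""
          return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** slope)

      def fit_ec50(dose, response):
          """Fit the Hill model and return the EC50 (illustrative initial guesses)."""
          p0 = [response.min(), response.max(), np.median(dose), 1.0]
          params, _ = curve_fit(hill, dose, response, p0=p0, maxfev=10000)
          return params[2]

      # Synthetic dose-response data (nM, arbitrary induction units).
      dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
      resp_tcdd = hill(dose, 1.0, 20.0, 0.3, 1.2)
      resp_congener = hill(dose, 1.0, 19.0, 1.8, 1.2)

      # Induction equivalency factor as relative potency versus 2,3,7,8-TCDD.
      ief = fit_ec50(dose, resp_tcdd) / fit_ec50(dose, resp_congener)
      print(f"IEF (relative potency) = {ief:.2f}")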

  16. Proteoform analysis of lipocalin-type prostaglandin D-synthase from human cerebrospinal fluid by isoelectric focusing and superficially porous liquid chromatography with Fourier transform mass spectrometry.

    PubMed

    Zhang, Junmei; Corbett, John R; Plymire, Daniel A; Greenberg, Benjamin M; Patrie, Steven M

    2014-05-01

    Lipocalin-type prostaglandin D-synthase (L-PGDS) in cerebrospinal fluid contributes to the maturation and maintenance of the CNS. L-PGDS PTMs may contribute to pathobiology of different CNS diseases, but methods to monitor its proteoforms are limited. Herein, we combined off-gel IEF and superficially porous LC (SPLC) with Fourier transform MS to characterize common cerebrospinal fluid L-PGDS proteoforms. Across 3D physiochemical space (pI, hydrophobicity, and mass), 217 putative proteoforms were observed from 21 to 24 kDa and pI 5-10. Glycoprotein accurate mass information, combined with MS/MS analysis of peptides generated from 2D-fractionated proteoforms, enabled the putative assignment of 208 proteoforms with varied PTM positional occupants. Fifteen structurally related N-glycans at N29 and N56 were observed, with different N-glycan compositional variants being preferred on each amino acid. We also observed that sialic acid content was a major factor for pI shifts between L-PGDS proteoforms. Other putative PTMs characterized include a core-1 HexHexNAc-O-glycan at S7, acetylation at K16 and K138, sulfonation at S41 and T142, and dioxidation at C43 and C145. The IEF-SPLC-MS platform presented provides 30-40× improved peak capacity versus conventional 2DE and shows potential for repeatable proteoform analysis of surrogate PTM-based biomarkers. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Density functional theory study on carbon dioxide absorption into aqueous solutions of 2-amino-2-methyl-1-propanol using a continuum solvation model.

    PubMed

    Yamada, Hidetaka; Matsuzaki, Yoichi; Higashii, Takayuki; Kazama, Shingo

    2011-04-14

    We used density functional theory (DFT) calculations with the latest continuum solvation model (SMD/IEF-PCM) to determine the mechanism of CO2 absorption into aqueous solutions of 2-amino-2-methyl-1-propanol (AMP). Possible absorption process reactions were investigated by transition-state optimization and intrinsic reaction coordinate (IRC) calculations in the aqueous solution at the SMD/IEF-PCM/B3LYP/6-31G(d) and SMD/IEF-PCM/B3LYP/6-311++G(d,p) levels of theory to determine the absorption pathways. We show that the carbamate anion forms by a two-step reaction via a zwitterion intermediate, and this occurs faster than the formation of the bicarbonate anion. However, we also predict that the carbamate readily decomposes by a reverse reaction rather than by hydrolysis. As a result, the final product is dominated by the thermodynamically stable bicarbonate anion that forms from AMP, H2O, and CO2 in a single-step termolecular reaction.

  18. Cross-reactivity between pollen extracts from six artemisia species.

    PubMed

    Brandys, J; Grimsøen, A; Nilsen, B M; Paulsen, B S; Park, H S; Hong, C S

    1993-06-01

    Pollen extracts of six different ARTEMISIA species, A. VULGARIS, A. SCOPARIA, A. PRINCEPS, A. TRIDENTATA, A. ANNUA, and A. CAMPESTRIS were compared using SDS-PAGE, IEF, immunoblotting, and immunoelectrophoretic methods. The band patterns obtained after SDS-PAGE and IEF showed a large degree of similarity between the extracts. Immunoblotting of these gels using a pool of sera from patients allergic to A. VULGARIS gave essentially the same IgE-binding band pattern with all the extracts, demonstrating an extensive degree of cross-reactivity between A. VULGARIS and the other ARTEMISIA species. FRIE using a polyspecific antiserum against A. VULGARIS showed that all the extracts contained several antigens that were immunologically identical to antigens in A. VULGARIS extract. Antigens showing immunological identity to the important A. VULGARIS allergens Ag 12 and ART V II were present in all the extracts. The cross-reactivity between A. VULGARIS and A. PRINCEPS was further verified by screening of ten Korean and nine Norwegian individual patient sera against extracts of both species in SDS-PAGE or IEF immunoblotting. Both groups of patients had essentially the same pattern of reactivity towards both pollen extracts.

  19. Dryout-type critical heat flux in vertical upward annular flow: effects of entrainment rate, initial entrained fraction and diameter

    NASA Astrophysics Data System (ADS)

    Wu, Zan; Wadekar, Vishwas; Wang, Chenglong; Sunden, Bengt

    2018-01-01

    This study aims to reveal the effects of liquid entrainment, initial entrained fraction and tube diameter on liquid film dryout in vertical upward annular flow during flow boiling. Entrainment and deposition rates of droplets were included in mass conservation equations to estimate the local liquid film mass flux in annular flow and the critical vapor quality at dryout conditions. Different entrainment rate correlations were evaluated using flow boiling data of water and organic liquids including n-pentane, iso-octane and R134a. The effect of the initial entrained fraction (IEF) at the churn-to-annular flow transition was also investigated. A transitional Boiling number was proposed to separate the IEF-sensitive region at high Boiling numbers from the IEF-insensitive region at low Boiling numbers. In addition, the effect of tube diameter on dryout vapor quality was studied. The dryout vapor quality increases with decreasing tube diameter. It should be noted that the dryout characteristics of submillimeter channels might differ because of different dryout mechanisms, i.e., drying of the liquid film underneath long vapor slugs and flow boiling instabilities.
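
    The abstract does not reproduce the governing equations or the specific entrainment correlations that were evaluated; the sketch below shows only the generic one-dimensional film mass balance such an analysis rests on, with crude placeholder deposition/entrainment coefficients and illustrative operating conditions.

        # Minimal sketch (placeholder correlations, not those evaluated in the paper):
        # one-dimensional liquid-film mass balance for upward annular flow boiling.
        # The film flow rate decreases by entrainment and evaporation and increases
        # by droplet deposition; dryout is taken as the point where it reaches zero.
        import numpy as np

        def dryout_quality(G, d, q_wall, h_fg, x_onset, e_onset,
                           k_dep=0.05, k_ent=0.03, dz=1e-3, L=5.0):
            """March along the tube and return the vapor quality at film dryout (or None).

            G mass flux [kg/m2 s], d tube diameter [m], q_wall heat flux [W/m2],
            h_fg latent heat [J/kg], x_onset quality at the churn-annular transition,
            e_onset initial entrained fraction (IEF) at that transition.
            k_dep, k_ent are crude placeholder deposition/entrainment rates [1/m].
            """
            area = np.pi * d ** 2 / 4.0
            x = x_onset
            w_liq = G * area * (1.0 - x)       # total liquid flow rate [kg/s]
            w_film = w_liq * (1.0 - e_onset)   # liquid film flow rate
            w_drop = w_liq * e_onset           # entrained droplet flow rate
            for _ in range(int(L / dz)):
                evap = q_wall * np.pi * d * dz / h_fg   # film evaporated over dz
                dep = k_dep * w_drop * dz               # droplets deposited onto the film
                ent = k_ent * w_film * dz               # film liquid re-entrained
                w_film += dep - ent - evap
                w_drop += ent - dep
                x += evap / (G * area)
                if w_film <= 0.0:
                    return x                            # dryout vapor quality
            return None                                 # no dryout within length L

        print(dryout_quality(G=500.0, d=0.01, q_wall=5e5, h_fg=2.0e6,
                             x_onset=0.2, e_onset=0.3))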

  20. Separation of similar yeast strains by IEF techniques.

    PubMed

    Horká, Marie; Růzicka, Filip; Holá, Veronika; Slais, Karel

    2009-06-01

    Rapid and reliable identification of the etiological agents of infectious diseases, especially species that are hardly distinguishable by routinely used laboratory methods, e.g. Candida albicans from C. dubliniensis, is necessary for early administration of an appropriate therapy. Similarly, the differentiation between biofilm-positive and biofilm-negative yeast strains is necessary for the choice of a therapeutic strategy due to the higher resistance of the biofilm-positive strains to antifungals. In this study, the rapid separation and identification of similar strains of Candida, using whole cells and/or their lysates, based on IEF are outlined. The isoelectric points of the monitored "similar pairs" of Candida species, C. albicans and C. dubliniensis and the biofilm-positive C. parapsilosis, C. tropicalis and their biofilm-negative strains, were determined by CIEF with UV detection in the acidic pH gradient. The differences between their isoelectric points were up to 0.3 units of pI. Simultaneously, a fast and simple technique was developed for the lysis of the outer cell membrane, and characteristic fingerprints were found in lysate electropherograms and in gels from capillary or gel IEF, respectively.

  1. The interplanetary electric field, cleft currents and plasma convection in the polar caps

    NASA Technical Reports Server (NTRS)

    Banks, P. M.; Clauer, C. R.; Araki, T.; St. Maurice, J. P.; Foster, J. C.

    1984-01-01

    The relationship between the pattern of plasma convection in the polar cleft and the dynamics of the interplanetary electric field (IEF) is examined theoretically. It is shown that owing to the geometrical properties of the magnetosphere, the East-West component of the IEF will drive field-aligned currents which connect to the ionosphere at points lying on either side of noon, while currents associated with the North-South component of the IEF will connect the two polar caps as sheet currents, also centered at 12 MLT. In order to describe the consequences of interplanetary magnetic field (IMF) effects upon high-latitude electric fields and convection patterns, a series of numerical simulations was carried out. The simulations were based on a solution to the steady-state equation of current continuity in a height-integrated ionosphere. The simulations demonstrate that a simple hydrodynamical model can account for the narrow 'throats' of strong dayside antisunward convection observed during periods of southward IMF, as well as the sunward convection observed during periods of strongly northward IMF.
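
    The steady-state current-continuity equation referred to above is usually written, in height-integrated form (our notation; the sign convention on the right-hand side is an assumption, not taken from the abstract), as

        \[
        \nabla_{\!\perp}\cdot\big(\boldsymbol{\Sigma}\,\nabla_{\!\perp}\Phi\big) \;=\; j_{\parallel}\,\sin I ,
        \]

    where Σ is the height-integrated conductivity tensor (Pedersen and Hall components), Φ the ionospheric electric potential, j∥ the field-aligned current density, and I the magnetic dip angle; the cleft currents driven by the East-West and North-South components of the IEF enter as the source term on the right-hand side.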

  2. Theoretical studies in isoelectric focusing. [mathematical modeling and computer simulation for biologicals purification process

    NASA Technical Reports Server (NTRS)

    Mosher, R. A.; Palusinski, O. A.; Bier, M.

    1982-01-01

    A mathematical model has been developed which describes the steady state in an isoelectric focusing (IEF) system with ampholytes or monovalent buffers. The model is based on the fundamental equations describing the component dissociation equilibria, mass transport due to diffusion and electromigration, electroneutrality, and the conservation of charge. The validity and usefulness of the model have been confirmed by using it to formulate buffer systems in actual laboratory experiments. The model has recently been extended to include the evolution of transient states not only in IEF but also in other modes of electrophoresis.
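
    In schematic form (our notation, not the authors'), the steady state of each ampholyte or buffer component i is a balance of diffusion against electromigration,

        \[
        -\,D_i\,\frac{d c_i}{d x} \;+\; \Omega_i\,\bar{z}_i(\mathrm{pH})\,c_i\,E \;=\; 0 ,
        \]

    with the mean charge \(\bar{z}_i\) obtained from the dissociation equilibria, the concentrations constrained by electroneutrality, \(\sum_i \bar{z}_i c_i + [\mathrm{H^+}] - [\mathrm{OH^-}] = 0\), and the electric field E fixed by conservation of charge (a spatially uniform current density).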

  3. Research and Implementation of Key Technologies in Multi-Agent System to Support Distributed Workflow

    NASA Astrophysics Data System (ADS)

    Pan, Tianheng

    2018-01-01

    In recent years, the combination of workflow management systems and multi-agent technology has become an active research field. The lack of flexibility in workflow management systems can be mitigated by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which addresses the fragility of traditional centralized workflow architectures. In this paper, the agents of a distributed workflow management system are divided according to their functions, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are discussed.

  4. Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.

    PubMed

    Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor

    2016-01-01

    In Quantum Chemistry, many tasks recur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time of manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant efforts and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward, and this makes it almost impossible to share them. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
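
    A minimal sketch of how atomic and meta-workflows might be represented as repository data structures (names and fields are hypothetical, not the SHIWA or paper schema): an atomic workflow exposes one domain function with typed input/output ports, and a meta-workflow orchestrates atomic workflows by wiring outputs to inputs.

        # Minimal sketch (hypothetical names, not the published data model):
        # atomic workflows as typed, shareable units and a meta-workflow that
        # orchestrates them by connecting output ports to input ports.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class AtomicWorkflow:
            name: str                    # e.g. "geometry_optimization"
            inputs: Dict[str, str]       # port name -> data type
            outputs: Dict[str, str]
            engine: str = "unspecified"  # execution back end (cluster, DCI, ...)

        @dataclass
        class Connection:
            source: str                  # "workflow_name.port"
            target: str

        @dataclass
        class MetaWorkflow:
            name: str
            steps: List[AtomicWorkflow] = field(default_factory=list)
            connections: List[Connection] = field(default_factory=list)

            def validate(self) -> bool:
                """Every connection must reference existing steps and ports."""
                ports = {f"{s.name}.{p}" for s in self.steps
                         for p in {**s.inputs, **s.outputs}}
                return all(c.source in ports and c.target in ports
                           for c in self.connections)

        opt = AtomicWorkflow("geometry_optimization", {"structure": "xyz"}, {"optimized": "xyz"})
        bench = AtomicWorkflow("benchmark_series", {"optimized": "xyz"}, {"energies": "csv"})
        meta = MetaWorkflow("opt_then_benchmark", [opt, bench],
                            [Connection("geometry_optimization.optimized",
                                        "benchmark_series.optimized")])
        print(meta.validate())  # True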

  5. Hermes: Seamless delivery of containerized bioinformatics workflows in hybrid cloud (HTC) environments

    NASA Astrophysics Data System (ADS)

    Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.

    Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.

  6. Transactivation potencies of the Baikal seal (Pusa sibirica) peroxisome proliferator-activated receptor α by perfluoroalkyl carboxylates and sulfonates: estimation of PFOA induction equivalency factors.

    PubMed

    Ishibashi, Hiroshi; Kim, Eun-Young; Iwata, Hisato

    2011-04-01

    The present study assessed the transactivation potencies of the Baikal seal (Pusa sibirica) peroxisome proliferator-activated receptor α (BS PPARα) by perfluorochemicals (PFCs) having various carbon chain lengths (C4-C12) using an in vitro reporter gene assay. Among the twelve PFCs tested at concentrations ranging from 7.8 to 250 μM, eight perfluoroalkyl carboxylates (PFCAs) and two perfluoroalkyl sulfonates (PFSAs) induced BS PPARα-mediated transcriptional activities in a dose-dependent manner. To compare the BS PPARα transactivation potencies of PFCs, the present study estimated the PFOA induction equivalency factors (IEFs), a ratio of the 50% effective concentration of PFOA to the concentration of each compound that can induce the response corresponding to 50% of the maximal response of PFOA. The order of IEFs for the PFCs was as follows: PFOA (IEF: 1)>PFHpA (0.89)>PFNA (0.61)>PFPeA (0.50)>PFHxS (0.41)>PFHxA (0.38)≈PFDA (0.37)>PFBA (0.26)=PFOS (0.26)>PFUnDA (0.15)≫PFDoDA and PFBuS (not activated). The structure-activity relationship analysis showed that, for PFCAs having more than seven perfluorinated carbons, there was a negative correlation (r=-1.0, p=0.017) between the number of perfluorinated carbons and the IEF, indicating that the number of perfluorinated carbons of PFCAs is one of the factors determining the transactivation potencies of the BS PPARα. The analysis also indicated that PFCAs were more potent than PFSAs with the same number of perfluorinated carbons. Treatment with a mixture of ten PFCs showed an additive action on the BS PPARα activation. Using the IEFs of individual PFCs and the concentrations of PFCs in the liver of wild Baikal seals, the PFOA induction equivalents (IEQs, 5.3-58 ng IEQ/g wet weight) were calculated. The correlation analysis revealed that the hepatic total IEQs showed a significant positive correlation with the hepatic expression levels of cytochrome P450 4A-like protein (r=0.53, p=0.036). This suggests that our approach may be useful for assessing the potential PPARα-mediated biological effects of complex mixtures of PFCs in the wild Baikal seal population.
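
    A minimal sketch of the IEQ arithmetic described above, assuming additivity as in the abstract: each measured concentration is weighted by its IEF and the products are summed. The IEF values below are those reported above; the hepatic concentrations are invented for illustration.

        # Minimal sketch: PFOA induction equivalents (IEQs) as an IEF-weighted sum of
        # hepatic PFC concentrations, under the additive-mixture assumption above.
        iefs = {
            "PFOA": 1.0, "PFHpA": 0.89, "PFNA": 0.61, "PFPeA": 0.50, "PFHxS": 0.41,
            "PFHxA": 0.38, "PFDA": 0.37, "PFBA": 0.26, "PFOS": 0.26, "PFUnDA": 0.15,
        }

        # Illustrative hepatic concentrations (ng/g wet weight), not measured values
        hepatic_ng_per_g = {"PFOA": 5.0, "PFNA": 30.0, "PFOS": 60.0, "PFDA": 8.0}

        ieq = sum(iefs.get(compound, 0.0) * conc
                  for compound, conc in hepatic_ng_per_g.items())
        print(f"Total IEQ: {ieq:.1f} ng IEQ/g wet weight")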

  7. msCompare: A Framework for Quantitative Analysis of Label-free LC-MS Data for Comparative Candidate Biomarker Studies*

    PubMed Central

    Hoekman, Berend; Breitling, Rainer; Suits, Frank; Bischoff, Rainer; Horvatovich, Peter

    2012-01-01

    Data processing forms an integral part of biomarker discovery and contributes significantly to the ultimate result. To compare and evaluate various publicly available open source label-free data processing workflows, we developed msCompare, a modular framework that allows the arbitrary combination of different feature detection/quantification and alignment/matching algorithms in conjunction with a novel scoring method to evaluate their overall performance. We used msCompare to assess the performance of workflows built from modules of publicly available data processing packages such as SuperHirn, OpenMS, and MZmine and our in-house developed modules on peptide-spiked urine and trypsin-digested cerebrospinal fluid (CSF) samples. We found that the quality of results varied greatly among workflows, and interestingly, heterogeneous combinations of algorithms often performed better than the homogenous workflows. Our scoring method showed that the union of feature matrices of different workflows outperformed the original homogenous workflows in some cases. msCompare is open source software (https://trac.nbic.nl/mscompare), and we provide a web-based data processing service for our framework by integration into the Galaxy server of the Netherlands Bioinformatics Center (http://galaxy.nbic.nl/galaxy) to allow scientists to determine which combination of modules provides the most accurate processing for their particular LC-MS data sets. PMID:22318370

  8. Agile parallel bioinformatics workflow management using Pwrake.

    PubMed

    Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro

    2011-09-08

    In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.

  9. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows. PMID:21899774

  10. Affinity immunoblotting - High resolution isoelectric focusing analysis of antibody clonotype distribution

    NASA Technical Reports Server (NTRS)

    Knisley, Keith A.; Rodkey, L. Scott

    1986-01-01

    A sensitive and specific method is proposed for the analysis of specific antibody clonotype changes occurring during an immune response and for comparing multiple sera for antibody clonotype similarities. Polyclonal serum antibodies separated by isoelectric focusing (IEF) were analyzed by an affinity immunoblotting method using antigen-coated nitrocellulose membranes. Antibodies present on the surface of the acrylamide gels following IEF bind the antigen on the nitrocellulose when the coated nitrocellulose is laid over the gels. The technique has been used to analyze Ig clonotypes specific for five protein antigens and two carbohydrate antigens. Optimal antigen concentrations for coating the nitrocellulose membranes were found to range from 10-100 microgram/ml.

  11. Design Considerations for Clean QED Fusion Propulsion Systems

    NASA Astrophysics Data System (ADS)

    Bussard, Robert W.; Jameson, Lorin W.

    1994-07-01

    The direct production of electric power appears possible from fusion reactions between fuels whose products consist solely of charged particles and thus do not present radiation hazards from energetic neutron production, as do reactions involving deuteron-bearing fuels. Among these are the fuels p, 11B, 3He, and 6Li. All of these can be "burned" in inertial-electrostatic-fusion (IEF) devices to power QED fusion-electric rocket engines. These IEF sources provide direct-converted electrical power at high voltage (MeV) to drive e-beams for efficient propellant heating to extreme temperatures, with resulting high specific impulse performance capabilities. IEF/QED engine systems using p11B can outperform all other advanced concepts for controlled fusion propulsion by 2-3 orders of magnitude, while 6Li6Li fusion yields one order of magnitude less advance. Either of these fusion rocket propulsion systems can provide very rapid transit for solar system missions, with high payload fractions in single-stage vehicles. The 3He3He reaction cannot be used practically for direct electric conversion because of the wide spread in energy of its fusion products. However, it may eventually prove useful for thermal/electrical power generation in central station power plants, or for direct-fusion-product (DFP) propellant heating in advanced deep-space rocket engines.

  12. Caprine and ovine Greek dairy products: The official German method generates false-positive results due to κ-casein gene polymorphism.

    PubMed

    Tsartsianidou, V; Triantafillidou, D; Karaiskou, N; Tarantili, P; Triantafillidis, G; Georgakis, E; Triantafyllidis, A

    2017-05-01

    Caseins are widely used for species identification of dairy products. Isoelectric focusing (IEF) of para-κ-casein peptide is used as the official German method for the differentiation between caprine (isoform A) and ovine (isoform B) dairy products, based on their different isoelectric points. The discrimination between Greek goat and ewe dairy products using IEF has, however, been shown to be problematic because of the existence of the ewe isoform in milk from Greek indigenous dairy goats. This could be due to nucleotide polymorphisms within the goat κ-casein gene of Greek indigenous breeds, which alter the isoelectric point of the para-κ-casein peptide and lead to false positive results. Previous DNA analysis of the goat κ-casein gene has shown high levels of polymorphism; however, no such information is available for Greek indigenous dairy goats. Therefore, 87 indigenous dairy goats were sequenced at exon IV of κ-casein gene. In total, 9 polymorphic sites were detected. Three nonsynonymous point mutations were identified, which change the isoelectric point of the goat para-κ-casein peptide so that it appears identical to that of the ewe peptide. Ten composite genotypes were reconstructed and 6 of them included the problematic point mutations. For the verification of genetic results, IEF was carried out. Both goat and ewe patterns appeared in the problematic genotypes. The frequency of these genotypes could be characterized as moderate (0.23) to high (0.60) within Greek indigenous breeds. However, this is not an issue restricted to Greece, as such genotypes have been detected in various non-Greek goat breeds. In conclusion, IEF based on the official German method is certainly inappropriate for ovine and caprine discrimination concerning Greek dairy goat products, and consequently a new method should be established. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Impact on enzyme activity as a new quality index of wastewater.

    PubMed

    Balestri, Francesco; Moschini, Roberta; Cappiello, Mario; Del-Corso, Antonella; Mura, Umberto

    2013-03-15

    The aim of this study was to define a new indicator for the quality of wastewaters that are released into the environment. A quality index is proposed for wastewater samples in terms of the inertness of wastewater samples toward enzyme activity. This involves taking advantage of the sensitivity of enzymes to pollutants that may be present in the waste samples. The effect of wastewater samples on the rate of a number of different enzyme-catalyzed reactions was measured, and the results for all the selected enzymes were analyzed in an integrated fashion (multi-enzymatic sensor). This approach enabled us to define an overall quality index, the "Impact on Enzyme Function" (IEF-index), which is composed of three indicators: i) the Synoptic parameter, related to the average effect of the waste sample on each component of the enzymatic sensor; ii) the Peak parameter, related to the maximum effect observed among all the effects exerted by the sample on the sensor components; and, iii) the Interference parameter, related to the number of sensor components that are affected less than a fixed threshold value. A number of water based samples including public potable tap water, fluids from urban sewage systems, wastewater disposal from leather, paper and dye industries were analyzed and the IEF-index was then determined. Although the IEF-index cannot discriminate between different types of wastewater samples, it could be a useful parameter in monitoring the improvement of the quality of a specific sample. However, by analyzing an adequate number of waste samples of the same type, even from different local contexts, the profile of the impact of each component of the multi-enzymatic sensor could be typical for specific types of waste. The IEF-index is proposed as a supplementary qualification score for wastewaters, in addition to the certification of the waste's conformity to legal requirements. Copyright © 2013 Elsevier Ltd. All rights reserved.
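
    The abstract does not give the exact formulas for the three indicators; the sketch below encodes one plausible reading (the Synoptic parameter as the mean effect across the enzyme panel, the Peak parameter as the largest single effect, and the Interference parameter as the count of enzymes affected less than a fixed threshold), with invented example data.

        # Minimal sketch (one plausible reading of the abstract, not the published
        # formulas): computing the three IEF-index indicators from the per-enzyme
        # effect of a waste sample on a multi-enzymatic sensor.
        def ief_indicators(effects_percent, threshold=10.0):
            """effects_percent: per-enzyme effect of the sample on reaction rate (%)."""
            effects = [abs(e) for e in effects_percent]
            synoptic = sum(effects) / len(effects)                   # average effect
            peak = max(effects)                                      # maximum effect
            interference = sum(1 for e in effects if e < threshold)  # barely affected enzymes
            return synoptic, peak, interference

        # Hypothetical panel of six enzyme-catalysed reactions
        sample_effects = [2.0, 55.0, 8.0, 12.0, 3.0, 20.0]
        print(ief_indicators(sample_effects))   # synoptic ~16.7, peak 55.0, interference 3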

  14. Application of reduced order modeling techniques to problems in heat conduction, isoelectric focusing and differential algebraic equations

    NASA Astrophysics Data System (ADS)

    Mathai, Pramod P.

    This thesis focuses on applying and augmenting 'Reduced Order Modeling' (ROM) techniques to large scale problems. ROM refers to the set of mathematical techniques that are used to reduce the computational expense of conventional modeling techniques, like finite element and finite difference methods, while minimizing the loss of accuracy that typically accompanies such a reduction. The first problem that we address pertains to the prediction of the level of heat dissipation in electronic and MEMS devices. With the ever decreasing feature sizes in electronic devices, and the accompanied rise in Joule heating, the electronics industry has, since the 1990s, identified a clear need for computationally cheap heat transfer modeling techniques that can be incorporated along with the electronic design process. We demonstrate how one can create reduced order models for simulating heat conduction in individual components that constitute an idealized electronic device. The reduced order models are created using Krylov Subspace Techniques (KST). We introduce a novel 'plug and play' approach, based on the small gain theorem in control theory, to interconnect these component reduced order models (according to the device architecture) to reliably and cheaply replicate whole device behavior. The final aim is to have this technique available commercially as a computationally cheap and reliable option that enables a designer to optimize for heat dissipation among competing VLSI architectures. Another place where model reduction is crucial to better design is Isoelectric Focusing (IEF) - the second problem in this thesis - which is a popular technique that is used to separate minute amounts of proteins from the other constituents that are present in a typical biological tissue sample. Fundamental questions about how to design IEF experiments still remain because of the high dimensional and highly nonlinear nature of the differential equations that describe the IEF process as well as the uncertainty in the parameters of the differential equations. There is a clear need to design better experiments for IEF without the current overhead of expensive chemicals and labor. We show how with a simpler modeling of the underlying chemistry, we can still achieve the accuracy that has been achieved in existing literature for modeling small ranges of pH (hydrogen ion concentration) in IEF, but with far less computational time. We investigate a further reduction of time by modeling the IEF problem using the Proper Orthogonal Decomposition (POD) technique and show why POD may not be sufficient due to the underlying constraints. The final problem that we address in this thesis addresses a certain class of dynamics with high stiffness - in particular, differential algebraic equations. With the help of simple examples, we show how the traditional POD procedure will fail to model certain high stiffness problems due to a particular behavior of the vector field which we will denote as twist. We further show how a novel augmentation to the traditional POD algorithm can model-reduce problems with twist in a computationally cheap manner without any additional data requirements.
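
    A generic snapshot-POD sketch of the technique discussed above (not the thesis code): assemble solution snapshots, compute their SVD, and keep the leading modes as a reduced basis; the 'twist' behaviour described above concerns dynamics that such a fixed linear basis fails to capture.

        # Minimal sketch of snapshot POD (generic, not the thesis implementation):
        # build a reduced basis from solution snapshots via the SVD and project.
        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            """snapshots: (n_dof, n_snapshots) array; returns the leading POD modes."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
            r = int(np.searchsorted(cumulative, energy)) + 1
            return U[:, :r]

        # Hypothetical snapshots of a 1-D diffusion-like field at several times
        x = np.linspace(0.0, 1.0, 200)
        times = np.linspace(0.05, 1.0, 40)
        snapshots = np.array([np.exp(-((x - 0.5) ** 2) / (4 * 0.01 * t)) / np.sqrt(t)
                              for t in times]).T

        basis = pod_basis(snapshots)
        reduced = basis.T @ snapshots          # reduced coordinates of each snapshot
        print(basis.shape, reduced.shape)      # a handful of modes captures the data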

  15. Lessons from implementing a combined workflow-informatics system for diabetes management.

    PubMed

    Zai, Adrian H; Grant, Richard W; Estey, Greg; Lester, William T; Andrews, Carl T; Yee, Ronnie; Mort, Elizabeth; Chueh, Henry C

    2008-01-01

    Shortcomings surrounding the care of patients with diabetes have been attributed largely to a fragmented, disorganized, and duplicative health care system that focuses more on acute conditions and complications than on managing chronic disease. To address these shortcomings, we developed a diabetes registry population management application to change the way our staff manages patients with diabetes. Use of this new application has helped us coordinate the responsibilities for intervening and monitoring patients in the registry among different users. Our experiences using this combined workflow-informatics intervention system suggest that integrating a chronic disease registry into clinical workflow for the treatment of chronic conditions creates a useful and efficient tool for managing disease.

  16. Effects of high intensity exercise on isoelectric profiles and SDS-PAGE mobility of erythropoietin.

    PubMed

    Voss, S; Lüdke, A; Romberg, S; Schänzer, E; Flenker, U; deMarees, M; Achtzehn, S; Mester, J; Schänzer, W

    2010-06-01

    Exercise-induced proteinuria is a common phenomenon in high performance sports. Based on the appearance of so-called "effort urines" in routine doping analysis, the purpose of this study was to investigate the influence of exercise-induced proteinuria on IEF profiles and SDS-PAGE relative mobility values (rMVs) of endogenous human erythropoietin (EPO). Twenty healthy subjects performed cycle-ergometer exercise until exhaustion. VO2max, blood lactate, urinary proteins and urinary creatinine were analysed to evaluate the exercise performance and proteinuria. IEF and SDS-PAGE analyses were performed to test for differences in electrophoretic behaviour of the endogenous EPO before and after exercise. All subjects showed increased levels of protein/creatinine ratio after performance (from 8.8 ± 5.2 to 26.1 ± 14.4). IEF analysis demonstrated an elevation of the relative amount of basic band areas (from 13.9 ± 11.3 to 36.4 ± 12.6). Using SDS-PAGE analysis we observed a decrease in rMVs after exercise and no shift in direction of the recombinant human EPO (rhEPO) region (from 0.543 ± 0.013 to 0.535 ± 0.012). Following identification criteria of the World Anti Doping Agency (WADA) all samples were negative. The implementation of the SDS-PAGE method represents a good solution to distinguish between results influenced by so-called effort urines and results of rhEPO abuse. Thus this method can be used to confirm adverse analytical findings.

  17. Digitization workflows for flat sheets and packets of plants, algae, and fungi

    PubMed Central

    Nelson, Gil; Sweeney, Patrick; Wallace, Lisa E.; Rabeler, Richard K.; Allard, Dorothy; Brown, Herrick; Carter, J. Richard; Denslow, Michael W.; Ellwood, Elizabeth R.; Germain-Aubrey, Charlotte C.; Gilbert, Ed; Gillespie, Emily; Goertzen, Leslie R.; Legler, Ben; Marchant, D. Blaine; Marsico, Travis D.; Morris, Ashley B.; Murrell, Zack; Nazaire, Mare; Neefus, Chris; Oberreiter, Shanna; Paul, Deborah; Ruhfel, Brad R.; Sasek, Thomas; Shaw, Joey; Soltis, Pamela S.; Watson, Kimberly; Weeks, Andrea; Mast, Austin R.

    2015-01-01

    Effective workflows are essential components in the digitization of biodiversity specimen collections. To date, no comprehensive, community-vetted workflows have been published for digitizing flat sheets and packets of plants, algae, and fungi, even though latest estimates suggest that only 33% of herbarium specimens have been digitally transcribed, 54% of herbaria use a specimen database, and 24% are imaging specimens. In 2012, iDigBio, the U.S. National Science Foundation’s (NSF) coordinating center and national resource for the digitization of public, nonfederal U.S. collections, launched several working groups to address this deficiency. Here, we report the development of 14 workflow modules with 7–36 tasks each. These workflows represent the combined work of approximately 35 curators, directors, and collections managers representing more than 30 herbaria, including 15 NSF-supported plant-related Thematic Collections Networks and collaboratives. The workflows are provided for download as Portable Document Format (PDF) and Microsoft Word files. Customization of these workflows for specific institutional implementation is encouraged. PMID:26421256

  18. Homogeneous immunoglobulins in the sera of lung carcinoma patients receiving cytotoxic chemotherapy--detection with the use of isoelectric focusing and immunoblotting.

    PubMed Central

    Haas, H; Lange, A; Schlaak, M

    1987-01-01

    Using isoelectric focusing (IEF) with immunoblotting, we have analysed serum immunoglobulins of 15 lung cancer patients on cytotoxic chemotherapy. In five of the patients homogeneous immunoglobulins were found which appeared between 9 and 18 months after beginning of treatment and were monoclonal in two and oligoclonal in three cases. These abnormalities were only partially shown by zonal electrophoresis with immunofixation and not detected by immune electrophoresis. Examination of 10 normal and 10 myeloma sera by the three techniques in parallel confirmed the competence and sensitivity of IEF with immunoblotting in detecting homogeneous immunoglobulins. Thus, this method provides a valuable tool for investigating an abnormal regulation of the immunoglobulin synthesis. PMID:3325203

  19. High-volume workflow management in the ITN/FBI system

    NASA Astrophysics Data System (ADS)

    Paulson, Thomas L.

    1997-02-01

    The Identification Tasking and Networking (ITN) Federal Bureau of Investigation system will manage the processing of more than 70,000 submissions per day. The workflow manager controls the routing of each submission through a combination of automated and manual processing steps whose exact sequence is dynamically determined by the results at each step. For most submissions, one or more of the steps involve the visual comparison of fingerprint images. The ITN workflow manager is implemented within a scalable client/server architecture. The paper describes the key aspects of the ITN workflow manager design which allow the high volume of daily processing to be successfully accomplished.

  20. chemalot and chemalot_knime: Command line programs as workflow tools for drug discovery.

    PubMed

    Lee, Man-Ling; Aliagas, Ignacio; Feng, Jianwen A; Gabriel, Thomas; O'Donnell, T J; Sellers, Benjamin D; Wiswedel, Bernd; Gobbi, Alberto

    2017-06-12

    Analyzing files containing chemical information is at the core of cheminformatics. Each analysis may require a unique workflow. This paper describes the chemalot and chemalot_knime open source packages. Chemalot is a set of command line programs with a wide range of functionalities for cheminformatics. The chemalot_knime package allows command line programs that read and write SD files from stdin and to stdout to be wrapped into KNIME nodes. The combination of chemalot and chemalot_knime not only facilitates the compilation and maintenance of sequences of command line programs but also allows KNIME workflows to take advantage of the compute power of a LINUX cluster. Use of the command line programs is demonstrated in three different workflow examples: (1) A workflow to create a data file with project-relevant data for structure-activity or property analysis and other types of investigations, (2) The creation of a quantitative structure-property-relationship model using the command line programs via KNIME nodes, and (3) The analysis of strain energy in small molecule ligand conformations from the Protein Data Bank database. The chemalot and chemalot_knime packages provide lightweight and powerful tools for many tasks in cheminformatics. They are easily integrated with other open source and commercial command line tools and can be combined to build new and even more powerful tools. The chemalot_knime package facilitates the generation and maintenance of user-defined command line workflows, taking advantage of the graphical design capabilities in KNIME. Graphical abstract: Example KNIME workflow with chemalot nodes and the corresponding command line pipe.

  1. Evaluation of relative potencies for in vitro transactivation of the baikal seal aryl hydrocarbon receptor by dioxin-like compounds.

    PubMed

    Kim, Eun-Young; Suda, Tomoko; Tanabe, Shinsuke; Batoev, Valeriy B; Petrov, Evgeny A; Iwata, Hisato

    2011-02-15

    To evaluate the sensitivity and responses to dioxins and related compounds (DRCs) via aryl hydrocarbon receptor (AHR) in Baikal seals (Pusa sibirica), we constructed an in vitro reporter gene assay system. Baikal seal AHR (BS AHR) expression plasmid and a reporter plasmid containing CYP1A1 promoter were transfected in COS-7 cells. The cells were treated with six representative congeners, and dose-dependent responses were obtained for all the congeners. EC50 values of 2,3,7,8-TCDD, 1,2,3,7,8-PeCDD, 2,3,7,8-TCDF, 2,3,4,7,8-PeCDF, and PCB126 were found to be 0.021, 1.8, 0.16, 2.4, and 2.5 nM, respectively. As the response did not reach the maximal plateau, EC50 value for PCB118 could not be obtained. The TCDD-EC50 for BS AHR was as high as that for dioxin sensitive C57BL/6 mouse AHR. The in vitro dose responses were further analyzed following an established systematic framework and multiple (20, 50, and 80%) relative potencies (REPs) to the maximum TCDD response. The estimates revealed lower REP ranges (20-80%) of PeCDD and PeCDF for BS AHR than for mouse AHR. Average of the 20, 50, and 80% REPs was designated as Baikal seal specific TCDD induction equivalency factor (BS IEF). The BS IEFs of PeCDD, TCDF, PeCDF, PCB126, and PCB118 were estimated as 0.010, 0.018, 0.0078, 0.0059, and 0.00010, respectively. Total TCDD induction equivalents (IEQs) that were calculated using BS IEFs and hepatic concentrations in wild Baikal seals corresponded to only 12-31% of 2005 WHO TEF-derived TEQs. Nevertheless, about 50% of Baikal seals accumulated IEQs over the TCDD-EC50 obtained in this study. This assessment was supported by the enhanced CYP1A1 mRNA expression found in 50% of the specimens contaminated over the TCDD-EC50. These findings suggest that the IEFs proposed from this in vitro assay could be used to predict AHR-mediated responses in wild seals.

  2. Recent advances in combination of capillary electrophoresis with mass spectrometry: methodology and theory.

    PubMed

    Klepárník, Karel

    2015-01-01

    This review focuses on the latest development of microseparation electromigration methods in capillaries and microfluidic devices with MS detection and identification. A wide selection of 183 relevant articles covers the literature published from June 2012 to May 2014 as a continuation of the review article on the same topic by Klepárník [Electrophoresis 2013, 34, 70-86]. Special attention is paid to the new improvements in the theory of instrumentation and methodology of MS interfacing with capillary versions of zone electrophoresis, ITP, and IEF. Ionization methods in MS include ESI, MALDI, and ICP. Although the main attention is paid to the development of instrumentation and methodology, representative examples also illustrate applications in proteomics, glycomics, metabolomics, biomarker research, forensics, pharmacology, food analysis, and single-cell analysis. The combinations of MS with capillary versions of electrochromatography and micellar electrokinetic chromatography are not included. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. ISO Mid-Infrared Spectra of Reflection Nebulae

    NASA Technical Reports Server (NTRS)

    Werner, M.; Uchida, K.; Sellgren, K.; Houdashelt, M.

    1999-01-01

    Our goal is to test predictions of models attributing the IEFs to polycyclic aromatic hydrocarbons (PAHs). Interstellar models predict that PAHs change from singly ionized to neutral as the UV intensity, G0, decreases.

  4. Structural characteristics of Tla products

    PubMed Central

    1985-01-01

    Biochemical study of thymus leukemia antigen (TL) from thymocytes of various Tla genotypes and from leukemia cells revealed features that, given present evidence, are peculiar to TL among class I products of the H-2:Qa:Tla region of chromosome 17. Sodium dodecyl sulfate- polyacrylamide gel electrophoresis (SDS-PAGE) of TL from thymocytes of all TL+ mouse strains, precipitated by anti-TL antiserum or monoclonal antibodies, showed two closely migrating bands of equal intensity in the heavy (H) chain position (45-50,000 mol wt). Comparison of these two bands by two-dimensional isoelectric focusing (2D IEF)-SDS-PAGE and 2D chymotryptic peptide mapping showed no differences indicative of protein dissimilarity. Thus, the two components of the H chain doublet may differ only in a feature of glycosylation that does not affect charge. The two leukemias studied gave only a single band in the H chain position. On 2D peptide mapping and 2D IEF-SDS-PAGE, the patterns for TL of Tlaa and Tlae thymocytes, which are closely related serologically, were broadly similar, but clearly different from the pattern typical of Tlac and Tlad thymocytes. 2D peptide maps of TL from Tlaa thymocytes and Tlaa leukemia cells did not differ. Leukemia cells of Tlab origin (thymocytes TL-) gave 2D peptide and 2D IEF-SDS-PAGE patterns of a third type. With the exception of Tlaa, thymocytes of TL+ mice yielded additional TL products of higher molecular weight than the TL H chain. PMID:3875681

  5. Linear response of field-aligned currents to the interplanetary electric field

    NASA Astrophysics Data System (ADS)

    Weimer, D. R.; Edwards, T. R.; Olsen, Nils

    2017-08-01

    Many studies have shown that the ionospheric polar cap electric potentials (PCEPs) exhibit a "saturation" behavior in response to the level of the driving by the solar wind. As the magnitudes of the interplanetary magnetic field (IMF) and electric field (IEF) increase, the PCEP response is linear at low driving levels, followed by a rollover to a more constant level. While there are several different theoretical explanations for this behavior, so far, no direct observational evidence has existed to confirm any particular model. In most models of this saturation, the interaction of the field-aligned currents (FACs) with the solar wind/magnetosphere/ionosphere system has a role. As the FACs are more difficult to measure, their behavior in response to the level of the IEF has not been investigated as thoroughly. In order to resolve the question of whether or not the FACs also exhibit saturation, we have processed the magnetic field measurements from the Ørsted, CHAMP, and Swarm missions, spanning more than a decade. As the amount of current in each region needs to be known, a new technique is used to separate and sum the current by region, widely known as R0, R1, and R2. These totals are found separately for the dawnside and duskside. Results indicate that the total FAC has a response to the IEF that is highly linear, continuing to increase well beyond the level at which the electric potentials saturate. The currents within each region have similar behavior.

  6. P185-M Protein Identification and Validation of Results in Workflows that Integrate over Various Instruments, Datasets, Search Engines

    PubMed Central

    Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.

    2007-01-01

    Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results on a greater number of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data to one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
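
    The decoy strategy sketched below is generic (not the ProteinScape/ProteinExtractor implementation): rank identifications by score and pick the score cutoff at which the estimated false-positive rate, decoy hits divided by target hits, stays at or below 5%. All scores are made up.

        # Minimal sketch of the decoy-database strategy: choose a score threshold so
        # that the estimated false-positive rate (decoys / targets) stays below 5%.
        def score_threshold(hits, max_fdr=0.05):
            """hits: list of (score, is_decoy); returns the accepting score cutoff."""
            best_cutoff = None
            targets = decoys = 0
            for score, is_decoy in sorted(hits, key=lambda h: h[0], reverse=True):
                if is_decoy:
                    decoys += 1
                else:
                    targets += 1
                if decoys / max(targets, 1) <= max_fdr:
                    best_cutoff = score    # everything scoring >= score is accepted
            return best_cutoff

        # Hypothetical (score, is_decoy) pairs from a combined search
        hits = [(95, False), (90, False), (88, False), (80, True), (78, False),
                (75, False), (70, True), (65, True), (60, False)]
        print(score_threshold(hits))   # 88 for these example hits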

  7. Modelling and analysis of workflow for lean supply chains

    NASA Astrophysics Data System (ADS)

    Ma, Jinping; Wang, Kanliang; Xu, Lida

    2011-11-01

    Cross-organisational workflow systems are a component of enterprise information systems which support collaborative business processes among organisations in a supply chain. Currently, the majority of workflow systems are developed from the perspective of information modelling without considering the actual requirements of supply chain management. In this article, we focus on the modelling and analysis of cross-organisational workflow systems in the context of lean supply chain (LSC) using Petri nets. First, the article describes the assumed conditions of a cross-organisational workflow net according to the idea of LSC and then discusses the standardisation of collaborative business processes between organisations in the context of LSC. Second, the concept of labelled time Petri nets (LTPNs) is defined through combining labelled Petri nets with time Petri nets, and the concept of labelled time workflow nets (LTWNs) is also defined based on LTPNs. Cross-organisational labelled time workflow nets (CLTWNs) are then defined based on LTWNs. Third, the article proposes the notion of OR-silent CLTWNs and an approach to verifying the soundness of LTWNs and CLTWNs. Finally, this article illustrates how to use the proposed method by means of a simple example. The purpose of this research is to establish a formal method for the modelling and analysis of workflow systems for LSCs. This study initiates a new perspective of research on cross-organisational workflow management and promotes the operations management of LSCs in real-world settings.
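
    The token-game sketch below shows only the basic place/transition semantics on which such workflow nets are built; the labels and time intervals of the LTPN/LTWN/CLTWN definitions in the article are omitted.

        # Minimal sketch of the basic Petri-net token game underlying workflow nets
        # (labels and firing-time intervals from the LTPN/LTWN definitions omitted).
        def enabled(marking, pre):
            """A transition is enabled if every input place holds enough tokens."""
            return all(marking.get(p, 0) >= n for p, n in pre.items())

        def fire(marking, pre, post):
            """Fire an enabled transition: consume input tokens, produce output tokens."""
            assert enabled(marking, pre), "transition not enabled"
            m = dict(marking)
            for p, n in pre.items():
                m[p] -= n
            for p, n in post.items():
                m[p] = m.get(p, 0) + n
            return m

        # Tiny cross-organisational hand-off: an order placed by one organisation
        # is received by another.
        marking = {"order_placed": 1, "received": 0}
        print(fire(marking, pre={"order_placed": 1}, post={"received": 1}))
        # {'order_placed': 0, 'received': 1}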

  8. Fabrication of Zirconia-Reinforced Lithium Silicate Ceramic Restorations Using a Complete Digital Workflow

    PubMed Central

    Rödiger, Matthias; Ziebolz, Dirk; Schmidt, Anne-Kathrin

    2015-01-01

    This case report describes the fabrication of monolithic all-ceramic restorations using zirconia-reinforced lithium silicate (ZLS) ceramics. The use of powder-free intraoral scanner, generative fabrication technology of the working model, and CAD/CAM of the restorations in the dental laboratory allows a completely digitized workflow. The newly introduced ZLS ceramics offer a unique combination of fracture strength (>420 MPa), excellent optical properties, and optimum polishing characteristics, thus making them an interesting material option for monolithic restorations in the digital workflow. PMID:26509088

  9. Principle of radial transport in low temperature annular plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yunchao, E-mail: yunchao.zhang@anu.edu.au; Charles, Christine; Boswell, Rod

    2015-07-15

    Radial transport in low temperature annular plasmas is investigated theoretically in this paper. The electrons are assumed to be in quasi-equilibrium due to their high temperature and light inertial mass. The ions are not in equilibrium and their transport is analyzed in three different situations: a low electric field (LEF) model, an intermediate electric field (IEF) model, and a high electric field (HEF) model. The universal IEF model smoothly connects the LEF and HEF models at their respective electric field strength limits and gives more accurate results of the ion mobility coefficient and effective ion temperature over the entire electric field strength range. Annular modelling is applied to an argon plasma and numerical results of the density peak position, the annular boundary loss coefficient and the electron temperature are given as functions of the annular geometry ratio and Paschen number.

  10. Charge heterogeneity of rat pituitary prolactin in relation to the estrous cycle, gonadectomy, and estrogen treatment.

    PubMed

    Ishikawa, J; Wakabayashi, K; Igarashi, M

    1985-10-01

    Pituitary samples were obtained from female rats at various stages of the estrous cycle, and from intact male and gonadectomized rats with and without estradiol treatment. The pituitary extracts, obtained with 60% EtOH at pH 9.5, were fractionated by preparative isoelectric focusing (IEF), and immunoreactive prolactin (IR-PRL) was measured by RIA. Three types of IR-PRL molecular species were found in these IEF profiles. The first type (species A) was consistently found in an area of pH 4.5-5.4, and consisted of two main subspecies with pIs 5.0 (A1) and 5.25 (A2). Species A accounted for most of the pituitary IR-PRL in males, gonadectomized animals, and in females in a basal state such as diestrus (D) II 17:00. Species A was also found exclusively in the serum at proestrus (PE) 19:00. The amounts of species A decreased notably when the secretion became active from PE 15:00 to 22:00, then increased at estrus (E) 6:00 and 10:00 when the second type (species B), which was found in the area of pH 5.4-6.8 only in trace amounts at basal states, increased markedly. Species B decreased again at E 17:00, while species A fully recovered. Species B also increased when PRL biosynthesis was stimulated by estradiol in intact male and gonadectomized rats. These findings indicate that species A must be the storage and secretory type of IR-PRL, and that species B must be IR-PRL in the biosynthetic process which is to be finally converted into species A. A third type (species C) was found in a region of pH 3.5-4.5 in the IEF profiles of gonadectomized animals. This species possibly represents IR-PRL molecules undergoing degradation. When the pituitary was extracted serially with 0.25 M ammonium sulfate pH 5.5 (fraction AMS) first, then with 60% EtOH pH 9.5 (fraction ET), fraction AMS contained mostly species B and C, while fraction ET contained species A almost exclusively. The results obtained with this differential extraction roughly coincided with IEF data, though some disagreements were observed.

  11. An ontology-based framework for bioinformatics workflows.

    PubMed

    Digiampietri, Luciano A; Perez-Alcazar, Jose de J; Medeiros, Claudia Bauzer

    2007-01-01

    The proliferation of bioinformatics activities brings new challenges - how to understand and organise these resources, how to exchange and reuse successful experimental procedures, and to provide interoperability among data and tools. This paper describes an effort toward these directions. It is based on combining research on ontology management, AI and scientific workflows to design, reuse and annotate bioinformatics experiments. The resulting framework supports automatic or interactive composition of tasks based on AI planning techniques and takes advantage of ontologies to support the specification and annotation of bioinformatics workflows. We validate our proposal with a prototype running on real data.

  12. Workflow and Electronic Health Records in Small Medical Practices

    PubMed Central

    Ramaiah, Mala; Subrahmanian, Eswaran; Sriram, Ram D; Lide, Bettijoyce B

    2012-01-01

    This paper analyzes the workflow and implementation of electronic health record (EHR) systems across different functions in small physician offices. We characterize the differences in the offices based on the levels of computerization in terms of workflow, sources of time delay, and barriers to using EHR systems to support the entire workflow. The study was based on a combination of questionnaires, interviews, in situ observations, and data collection efforts. This study was not intended to be a full-scale time-and-motion study with precise measurements but was intended to provide an overview of the potential sources of delays while performing office tasks. The study follows an interpretive model of case studies rather than a large-sample statistical survey of practices. To identify time-consuming tasks, workflow maps were created based on the aggregated data from the offices. The results from the study show that specialty physicians are more favorable toward adopting EHR systems than primary care physicians are. The barriers to adoption of EHR systems by primary care physicians can be attributed to the complex workflows that exist in primary care physician offices, leading to nonstandardized workflow structures and practices. Also, primary care physicians would benefit more from EHR systems if the systems could interact with external entities. PMID:22737096

  13. First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack and OpenFlow orchestration.

    PubMed

    Szyrkowiec, Thomas; Autenrieth, Achim; Gunning, Paul; Wright, Paul; Lord, Andrew; Elbers, Jörg-Peter; Lumb, Alan

    2014-02-10

    For the first time, we demonstrate the orchestration of elastic datacenter and inter-datacenter transport network resources using a combination of OpenStack and OpenFlow. Programmatic control allows a datacenter operator to dynamically request optical lightpaths from a transport network operator to accommodate rapid changes of inter-datacenter workflows.

  14. Support for Taverna workflows in the VPH-Share cloud platform.

    PubMed

    Kasztelnik, Marek; Coto, Ernesto; Bubak, Marian; Malawski, Maciej; Nowakowski, Piotr; Arenas, Juan; Saglimbeni, Alfredo; Testi, Debora; Frangi, Alejandro F

    2017-07-01

    To address the increasing need for collaborative endeavours within the Virtual Physiological Human (VPH) community, the VPH-Share collaborative cloud platform allows researchers to expose and share sequences of complex biomedical processing tasks in the form of computational workflows. The Taverna Workflow System is a very popular tool for orchestrating complex biomedical & bioinformatics processing tasks in the VPH community. This paper describes the VPH-Share components that support the building and execution of Taverna workflows, and explains how they interact with other VPH-Share components to improve the capabilities of the VPH-Share platform. Taverna workflow support is delivered by the Atmosphere cloud management platform and the VPH-Share Taverna plugin. These components are explained in detail, along with the two main procedures that were developed to enable this seamless integration: workflow composition and execution. 1) Seamless integration of VPH-Share with other components and systems. 2) Extended range of different tools for workflows. 3) Successful integration of scientific workflows from other VPH projects. 4) Execution speed improvement for medical applications. The presented workflow integration provides VPH-Share users with a wide range of different possibilities to compose and execute workflows, such as desktop or online composition, online batch execution, multithreading, remote execution, etc. The specific advantages of each supported tool are presented, as are the roles of Atmosphere and the VPH-Share plugin within the VPH-Share project. The combination of the VPH-Share plugin and Atmosphere engenders the VPH-Share infrastructure with far more flexible, powerful and usable capabilities for the VPH-Share community. As both components can continue to evolve and improve independently, we acknowledge that further improvements are still to be developed and will be described. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. DIaaS: Data-Intensive workflows as a service - Enabling easy composition and deployment of data-intensive workflows on Virtual Research Environments

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based execution of an Asterism workflow is managed by dispel4py, while data movement between different e-Infrastructures and coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to run data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and for image storing and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters, respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing engineering time and computational cost.
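
    As a minimal illustration of the container setup described above, the sketch below starts three containers on a shared network with the Docker SDK for Python; the image names and tags are hypothetical placeholders, not the actual images published by the DIaaS authors on Docker Hub.

        # Minimal sketch (hypothetical image names): launch three DIaaS-style containers --
        # a Pegasus node, an MPI cluster head, and a Storm cluster head -- on one network.
        import docker

        client = docker.from_env()
        client.networks.create("diaas-net", driver="bridge")

        containers = {}
        for name, image in [("pegasus-node", "example/pegasus-dispel4py"),   # Container1 (placeholder)
                            ("mpi-cluster", "example/dispel4py-mpi"),        # Container2 (placeholder)
                            ("storm-cluster", "example/dispel4py-storm")]:   # Container3 (placeholder)
            containers[name] = client.containers.run(
                image, name=name, network="diaas-net", detach=True)

        # The Pegasus node would then submit the workflow; Phase1 runs on the MPI
        # container and Phase2 on the Storm container, as in the seismic use case.
        print({name: c.status for name, c in containers.items()})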

  16. Combining transrectal ultrasound and CT for image-guided adaptive brachytherapy of cervical cancer: Proof of concept.

    PubMed

    Nesvacil, Nicole; Schmid, Maximilian P; Pötter, Richard; Kronreif, Gernot; Kirisits, Christian

    To investigate the feasibility of a treatment planning workflow for three-dimensional image-guided cervix cancer brachytherapy, combining volumetric transrectal ultrasound (TRUS) for target definition with CT for dose optimization to organs at risk (OARs), for settings with no access to MRI. A workflow for TRUS/CT-based volumetric treatment planning was developed, based on a customized system including ultrasound probe, stepper unit, and software for image volume acquisition. A full TRUS/CT-based workflow was simulated in a clinical case and compared with MR- or CT-only delineation. High-risk clinical target volume was delineated on TRUS, and OARs were delineated on CT. Manually defined tandem/ring applicator positions on TRUS and CT were used as a reference for rigid registration of the image volumes. Treatment plan optimization for TRUS target and CT organ volumes was performed and compared to MRI and CT target contours. TRUS/CT-based contouring, applicator reconstruction, image fusion, and treatment planning were feasible, and the full workflow could be successfully demonstrated. The TRUS/CT plan fulfilled all clinical planning aims. Dose-volume histogram evaluation of the TRUS/CT-optimized plan (high-risk clinical target volume D90, OAR D2cm³) on different image modalities showed good agreement between dose values reported for TRUS/CT and MRI-only reference contours and large deviations for CT-only target parameters. A TRUS/CT-based workflow for full three-dimensional image-guided cervix brachytherapy treatment planning seems feasible and may be clinically comparable to MRI-based treatment planning. Further development to solve challenges with applicator definition in the TRUS volume is required before systematic applicability of this workflow. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  17. Scientific Workflows + Provenance = Better (Meta-)Data Management

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.; Cuevas-Vicenttín, V.; Missier, P.; Dey, S.; Kianmajd, P.; Wei, Y.; Koop, D.; Chirigati, F.; Altintas, I.; Belhajjame, K.; Bowers, S.

    2013-12-01

    The origin and processing history of an artifact is known as its provenance. Data provenance is an important form of metadata that explains how a particular data product came about, e.g., how and when it was derived in a computational process, which parameter settings and input data were used, etc. Provenance information provides transparency and helps to explain and interpret data products. Other common uses and applications of provenance include quality control, data curation, result debugging, and more generally, 'reproducible science'. Scientific workflow systems (e.g. Kepler, Taverna, VisTrails, and others) provide controlled environments for developing computational pipelines with built-in provenance support. Workflow results can then be explained in terms of workflow steps, parameter settings, input data, etc. using provenance that is automatically captured by the system. Scientific workflows themselves provide a user-friendly abstraction of the computational process and are thus a form of ('prospective') provenance in their own right. The full potential of provenance information is realized when combining workflow-level information (prospective provenance) with trace-level information (retrospective provenance). To this end, the DataONE Provenance Working Group (ProvWG) has developed an extension of the W3C PROV standard, called D-PROV. Whereas PROV provides a 'least common denominator' for exchanging and integrating provenance information, D-PROV adds new 'observables' that describe workflow-level information (e.g., the functional steps in a pipeline), as well as workflow-specific trace-level information (e.g., timestamps for each workflow step executed, the inputs and outputs used, etc.). Using examples, we will demonstrate how the combination of prospective and retrospective provenance provides added value in managing scientific data. The DataONE ProvWG is also developing tools based on D-PROV that allow scientists to get more mileage from provenance metadata. DataONE is a federation of member nodes that store data and metadata for discovery and access. By enriching metadata with provenance information, search and reuse of data is enhanced, and the 'social life' of data (being the product of many workflow runs, different people, etc.) is revealed. We are currently prototyping a provenance repository (PBase) to demonstrate what can be achieved with advanced provenance queries. The ProvExplorer and ProPub tools support advanced ad-hoc querying and visualization of provenance as well as customized provenance publications (e.g., to address privacy issues, or to focus provenance to relevant details). In a parallel line of work, we are exploring ways to add provenance support to widely-used scripting platforms (e.g. R and Python) and then expose that information via D-PROV.
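
    To make the combination of prospective and retrospective provenance more concrete, the sketch below records one workflow step and one run of that step using the prov Python package (a W3C PROV implementation); the namespace, step, file, and agent names are invented for illustration, and D-PROV-specific vocabulary is only mimicked, not reproduced.

        # Sketch using the 'prov' Python package (W3C PROV); all names are illustrative.
        from datetime import datetime
        from prov.model import ProvDocument

        doc = ProvDocument()
        doc.add_namespace("ex", "http://example.org/workflow/")

        # Prospective provenance: the workflow step as a plan-like entity.
        doc.entity("ex:Step_Normalize", {"prov:type": "ex:WorkflowStep"})

        # Retrospective provenance: one run of that step, with runtime observables.
        run = doc.activity("ex:Step_Normalize_run42",
                           startTime=datetime(2013, 9, 1, 10, 0, 0),
                           endTime=datetime(2013, 9, 1, 10, 3, 12))
        raw = doc.entity("ex:raw_data.csv")
        out = doc.entity("ex:normalized_data.csv")

        doc.used(run, raw)                               # inputs used by the run
        doc.wasGeneratedBy(out, run)                     # outputs produced by the run
        doc.wasAssociatedWith(run, doc.agent("ex:alice"))

        print(doc.serialize(indent=2))                   # PROV-JSON that tools can exchange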

  18. Metaworkflows and Workflow Interoperability for Heliophysics

    NASA Astrophysics Data System (ADS)

    Pierantoni, Gabriele; Carley, Eoin P.

    2014-06-01

    Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is best suited for workflows. This approach has been investigated in the HELIO project. The HELIO project developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in the TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows into three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows, usually developed in TAVERNA. They implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter sweep jobs to investigate Science Cases on multiple large data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.

  19. Quantitative analysis of glycation and its impact on antigen binding

    PubMed Central

    Mo, Jingjie; Yan, Qingrong; Sokolowska, Izabela; Lewis, Michael J.; Hu, Ping

    2018-01-01

    Glycation has been observed in antibody therapeutics manufactured by the fed-batch fermentation process. It not only increases the heterogeneity of antibodies, but also potentially affects product safety and efficacy. In this study, non-glycated and glycated fractions enriched from a monoclonal antibody (mAb1) as well as glucose-stressed mAb1 were characterized using a variety of biochemical, biophysical and biological assays to determine the effects of glycation on the structure and function of mAb1. Glycation was detected at multiple lysine residues and reduced the antigen binding activity of mAb1. Heavy chain Lys100, which is located in the complementarity-determining region of mAb1, had the highest levels of glycation in both stressed and unstressed samples, and glycation of this residue was likely responsible for the loss of antigen binding based on hydrogen/deuterium exchange mass spectrometry analysis. Peptide mapping and intact liquid chromatography-mass spectrometry (LC-MS) can both be used to monitor the glycation levels. Peptide mapping provides site-specific glycation results, while intact LC-MS is a quicker and simpler method to quantitate the total glycation levels and is more useful for routine testing. Capillary isoelectric focusing (cIEF) can also be used to monitor glycation because glycation induces an acidic shift in the cIEF profile. As expected, total glycation measured by intact LC-MS correlated very well with the percentage of total acidic peaks or main peak measured by cIEF. In summary, we demonstrated that glycation can affect the function of a representative IgG1 mAb. The analytical characterization, as described here, should be generally applicable for other therapeutic mAbs. PMID:29436927
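
    The intact LC-MS quantitation mentioned above lends itself to a simple calculation: each glucose adduct adds roughly +162 Da to the intact mass, so total glycation can be estimated from deconvoluted peak intensities. The sketch below is a minimal illustration with made-up peak intensities, not data from the study.

        # Illustrative only: estimate total glycation (%) from deconvoluted intact-mass
        # peak intensities, treating each +162 Da adduct as one glycation (made-up numbers).
        GLUCOSE_ADDUCT_DA = 162.05

        # (mass shift relative to the unmodified mAb, relative intensity)
        peaks = [(0.0, 100.0), (162.1, 22.0), (324.1, 3.5)]

        unmodified = sum(i for shift, i in peaks if shift < GLUCOSE_ADDUCT_DA / 2)
        glycated = sum(i for shift, i in peaks if shift >= GLUCOSE_ADDUCT_DA / 2)

        total = unmodified + glycated
        print(f"Total glycation: {100.0 * glycated / total:.1f}%")  # ~20.3% for these numbers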

  20. IEF-2DE Analysis and Protein Identification

    USDA-ARS?s Scientific Manuscript database

    Isoelectric focusing followed by SDS-PAGE (IEF-2DE) separates proteins in a two-dimensional matrix of protein pI (Protein Isoelectric Point) and molecular weight (MW). The technique is particularly useful to distinguish protein isoforms (Radwan et al., 2012) and proteins that contain post-translatio...

  1. The Insurance Educator. Volume VI. 1997.

    ERIC Educational Resources Information Center

    Insurance Educator, 1997

    1997-01-01

    This Insurance Education Foundation (IEF) newsletter provides secondary educators with a greater knowledge of insurance and access to teaching materials. It also provides students with insurance career information. The newsletter is intended for secondary educators who teach insurance in any subject. Substantive articles contained in this issue…

  2. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

    A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of the solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model, the method gave an unsigned average error of 2.1 kcal/mol and an RMSD of 2.6 kcal/mol for anions; for cations, the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558

  3. Molecular classification of fatty liver by high-throughput profiling of protein post-translational modifications.

    PubMed

    Urasaki, Yasuyo; Fiscus, Ronald R; Le, Thuc T

    2016-04-01

    We describe an alternative approach to classifying fatty liver by profiling protein post-translational modifications (PTMs) with high-throughput capillary isoelectric focusing (cIEF) immunoassays. Four strains of mice were studied, with fatty livers induced by different causes, such as ageing, genetic mutation, acute drug usage, and high-fat diet. Nutrient-sensitive PTMs of a panel of 12 liver metabolic and signalling proteins were simultaneously evaluated with cIEF immunoassays, using nanograms of total cellular protein per assay. Changes to liver protein acetylation, phosphorylation, and O-linked N-acetylglucosamine (O-GlcNAc) glycosylation were quantified and compared between normal and diseased states. Fatty liver tissues could be distinguished from one another by distinctive protein PTM profiles. Fatty liver is currently classified by morphological assessment of lipid droplets, without identifying the underlying molecular causes. In contrast, high-throughput profiling of protein PTMs has the potential to provide molecular classification of fatty liver. Copyright © 2016 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  4. Penetration electric fields: A Volland Stern approach

    NASA Astrophysics Data System (ADS)

    Burke, William J.

    2007-07-01

    This paper reformulates the Volland-Stern model, separating contributions from corotation and convection to predict electric field penetration of the inner magnetosphere using data from the Advanced Composition Explorer (ACE) satellite. In the absence of shielding, the model electric field is $E_{VS} = \Phi_{PC} / (2 L_Y R_E)$, where $\Phi_{PC}$ is the polar cap potential and $2 L_Y R_E$ is the width of the magnetosphere along the dawn-dusk meridian. $\Phi_{PC}$ is estimated from the interplanetary electric field (IEF) and the dynamic pressure of the solar wind ($P_{SW}$); values of $L_Y$ were approximated using $P_{SW}$ and simple force-balance considerations. ACE measurements on 16-17 April 2002 were then used to calculate $E_{VS}$ for comparison with the eastward electric field component ($E_{J\phi}$) detected by the incoherent scatter radar at Jicamarca, Peru. While the interplanetary magnetic field (IMF) was southward, the model predicted the observed ratios of $E_{VS}$/IEF. During intervals of northward IMF, $E_{J\phi}$ turned westward, suggesting that a northward-IMF $B_Z$ system of field-aligned currents affected the electrodynamics of the dayside ionosphere on rapid time scales.
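
    For readers who want to reproduce the back-of-the-envelope scaling, the sketch below evaluates $E_{VS} = \Phi_{PC}/(2 L_Y R_E)$ for an assumed polar cap potential and magnetospheric half-width; the input values are arbitrary examples, not the ACE-derived values used in the paper.

        # Evaluate E_VS = Phi_PC / (2 * L_Y * R_E) with example (not paper) values.
        R_E = 6.371e6          # Earth radius in metres
        phi_pc = 60e3          # assumed polar cap potential, 60 kV
        L_Y = 15.0             # assumed dawn-dusk half-width in Earth radii

        E_VS = phi_pc / (2.0 * L_Y * R_E)        # V/m
        print(f"E_VS = {E_VS * 1e3:.3f} mV/m")   # ~0.31 mV/m for these inputs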

  5. Isoelectric focusing in space

    NASA Technical Reports Server (NTRS)

    Bier, M.; Egen, N. B.; Mosher, R. A.; Twitty, G. E.

    1982-01-01

    The potential of space electrophoresis is conditioned by the fact that all electrophoretic techniques require the suppression of gravity-caused convection. Isoelectric focusing (IEF) is a powerful variant of electrophoresis, in which amphoteric substances are separated in a pH gradient according to their isoelectric points. A new apparatus for large-scale IEF, utilizing a recycling principle, has been developed. In the ground-based prototype, laminar flow is provided by a series of parallel filter elements. The operation of the apparatus is monitored by an automated array of pH and ultraviolet absorption sensors under control of a desk-top computer. The apparatus has proven to be useful for the purification of a variety of enzymes, snake venom proteins, peptide hormones, and other biologicals, including interferon produced by genetic engineering techniques. In planning for a possible space apparatus, a crucial question regarding electroosmosis needs to be addressed. To solve this problem, simple focusing test modules are planned for inclusion in an early Shuttle flight.

  6. Next-generation sequencing meets genetic diagnostics: development of a comprehensive workflow for the analysis of BRCA1 and BRCA2 genes

    PubMed Central

    Feliubadaló, Lídia; Lopez-Doriga, Adriana; Castellsagué, Ester; del Valle, Jesús; Menéndez, Mireia; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Gómez, Carolina; Campos, Olga; Pineda, Marta; González, Sara; Moreno, Victor; Brunet, Joan; Blanco, Ignacio; Serra, Eduard; Capellá, Gabriel; Lázaro, Conxi

    2013-01-01

    Next-generation sequencing (NGS) is changing genetic diagnosis due to its huge sequencing capacity and cost-effectiveness. The aim of this study was to develop an NGS-based workflow for routine diagnostics for hereditary breast and ovarian cancer syndrome (HBOCS), to improve genetic testing for BRCA1 and BRCA2. An NGS-based workflow was designed using BRCA MASTR kit amplicon libraries followed by GS Junior pyrosequencing. Data analysis combined the freely available Variant Identification Pipeline software with ad hoc R scripts, including a cascade of filters to generate coverage and variant-calling reports. A BRCA homopolymer assay was performed in parallel. A research scheme was designed in two parts. A Training Set of 28 DNA samples containing 23 unique pathogenic mutations and 213 other variants (33 unique) was used. The workflow was validated in a set of 14 samples from HBOCS families in parallel with the current diagnostic workflow (Validation Set). The NGS-based workflow developed permitted the identification of all pathogenic mutations and genetic variants, including those located in or close to homopolymers. The use of NGS for detecting copy-number alterations was also investigated. The workflow meets the sensitivity and specificity requirements for the genetic diagnosis of HBOCS and improves on the cost-effectiveness of current approaches. PMID:23249957
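
    The abstract mentions a cascade of filters over coverage and variant calls implemented in ad hoc R scripts; the snippet below is an illustrative Python sketch of what such a coverage/variant-frequency filter cascade might look like, with invented thresholds, field names, and variants, and is not the authors' actual pipeline.

        # Hypothetical filter cascade over amplicon variant calls (invented thresholds/fields).
        MIN_COVERAGE = 100          # reads per amplicon position
        MIN_VARIANT_FREQ = 0.20     # minimum variant allele frequency for a heterozygous call

        variants = [
            {"pos": "BRCA1:c.68_69delAG", "coverage": 512, "alt_freq": 0.47},
            {"pos": "BRCA2:c.9976A>T",    "coverage": 64,  "alt_freq": 0.51},   # fails coverage
            {"pos": "BRCA1:c.181T>G",     "coverage": 300, "alt_freq": 0.08},   # likely noise
        ]

        def passes_filters(v):
            return v["coverage"] >= MIN_COVERAGE and v["alt_freq"] >= MIN_VARIANT_FREQ

        reported = [v for v in variants if passes_filters(v)]
        flagged_low_coverage = [v for v in variants if v["coverage"] < MIN_COVERAGE]

        print("report:", [v["pos"] for v in reported])
        print("re-sequence (low coverage):", [v["pos"] for v in flagged_low_coverage])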

  7. Create, run, share, publish, and reference your LC-MS, FIA-MS, GC-MS, and NMR data analysis workflows with the Workflow4Metabolomics 3.0 Galaxy online infrastructure for metabolomics.

    PubMed

    Guitton, Yann; Tremblay-Franco, Marie; Le Corguillé, Gildas; Martin, Jean-François; Pétéra, Mélanie; Roger-Mele, Pierrick; Delabrière, Alexis; Goulitquer, Sophie; Monsoor, Misharl; Duperier, Christophe; Canlet, Cécile; Servien, Rémi; Tardivel, Patrick; Caron, Christophe; Giacomoni, Franck; Thévenot, Etienne A

    2017-12-01

    Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the multiplicity of bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M; http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, alga evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data and output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high-value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but it is also the first reference repository for metabolomics workflows. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Parametric Workflow (BIM) for the Repair Construction of Traditional Historic Architecture in Taiwan

    NASA Astrophysics Data System (ADS)

    Ma, Y.-P.; Hsu, C. C.; Lin, M.-C.; Tsai, Z.-W.; Chen, J.-Y.

    2015-08-01

    In Taiwan, numerous existing traditional buildings are constructed with wooden, brick, and stone structures. This paper focuses on traditional historic architecture in Taiwan, taking traditional wooden-structure buildings as the design proposition, and develops a BIM workflow for modeling complex combined wooden geometry, integrating it with traditional 2D documents, and visualizing repair construction assumptions within the 3D model representation. The goal of this article is to explore the current problems to overcome in wooden historic building conservation and to introduce BIM technology for conserving, documenting, managing, and creating full engineering drawings and information to effectively support historic conservation. Although BIM is mostly oriented to current construction praxis, there have been some attempts to investigate its applicability in historic conservation projects. This article also illustrates the importance and advantages of using a BIM workflow in the repair construction process, compared with a generic workflow.

  9. A Comprehensive Workflow of Mass Spectrometry-Based Untargeted Metabolomics in Cancer Metabolic Biomarker Discovery Using Human Plasma and Urine

    PubMed Central

    Zou, Wei; She, Jianwen; Tolstikov, Vladimir V.

    2013-01-01

    Currently available biomarkers lack sensitivity and/or specificity for early detection of cancer. To address this challenge, a robust and complete workflow for metabolic profiling and data mining is described in detail. Three independent and complementary analytical techniques for metabolic profiling are applied: hydrophilic interaction liquid chromatography (HILIC–LC), reversed-phase liquid chromatography (RP–LC), and gas chromatography (GC). All three techniques are coupled to a mass spectrometer (MS) in the full scan acquisition mode, and both unsupervised and supervised methods are used for data mining. Univariate and multivariate feature selection are used to determine subsets of potentially discriminative predictors. These predictors are further identified by obtaining accurate masses and isotopic ratios using selected ion monitoring (SIM) and data-dependent MS/MS and/or accurate mass MSn ion tree scans utilizing high resolution MS. A list combining all of the identified potential biomarkers generated from different platforms and algorithms is used for pathway analysis. Such a workflow combining comprehensive metabolic profiling and advanced data mining techniques may provide a powerful approach for metabolic pathway analysis and biomarker discovery in cancer research. Two case studies with previously published data are adapted and included to elucidate the application of the workflow. PMID:24958150
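
    As a simplified illustration of the univariate feature selection step mentioned above, the sketch below ranks metabolite features by a two-sample t-test with Benjamini-Hochberg correction; it uses synthetic data and SciPy/NumPy and is not the authors' actual data-mining code.

        # Toy univariate feature selection for metabolite features (synthetic data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_features = 200
        cases = rng.normal(0.0, 1.0, size=(30, n_features))
        controls = rng.normal(0.0, 1.0, size=(30, n_features))
        cases[:, :5] += 1.5          # plant 5 genuinely shifted features

        pvals = np.array([stats.ttest_ind(cases[:, j], controls[:, j]).pvalue
                          for j in range(n_features)])

        # Benjamini-Hochberg step-up procedure at 5% FDR
        order = np.argsort(pvals)
        ranks = np.arange(1, n_features + 1)
        passed = pvals[order] <= 0.05 * ranks / n_features
        n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
        candidates = order[:n_sig]

        print("candidate discriminative features:", sorted(candidates.tolist()))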

  10. Autonomous Metabolomics for Rapid Metabolite Identification in Global Profiling

    DOE PAGES

    Benton, H. Paul; Ivanisevic, Julijana; Mahieu, Nathaniel G.; ...

    2014-12-12

    An autonomous metabolomic workflow combining mass spectrometry analysis with tandem mass spectrometry data acquisition was designed to allow for simultaneous data processing and metabolite characterization. Although previously tandem mass spectrometry data have been generated on the fly, the experiments described herein combine this technology with the bioinformatic resources of XCMS and METLIN. We can analyze large profiling datasets and simultaneously obtain structural identifications, as a result of this unique integration. Furthermore, validation of the workflow on bacterial samples allowed the profiling on the order of a thousand metabolite features with simultaneous tandem mass spectra data acquisition. The tandem mass spectrometry data acquisition enabled automatic search and matching against the METLIN tandem mass spectrometry database, shortening the current workflow from days to hours. Overall, the autonomous approach to untargeted metabolomics provides an efficient means of metabolomic profiling, and will ultimately allow the more rapid integration of comparative analyses, metabolite identification, and data analysis at a systems biology level.

  11. Object-based detection of vehicles using combined optical and elevation data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2018-02-01

    The detection of vehicles is an important and challenging topic that is relevant for many applications. In this work, we present a workflow that utilizes optical and elevation data to detect vehicles in remotely sensed urban data. This workflow consists of three consecutive stages: candidate identification, classification, and single vehicle extraction. Unlike in most previous approaches, fusion of both data sources is strongly pursued at all stages. While the first stage utilizes the fact that most man-made objects are rectangular in shape, the second and third stages employ machine learning techniques combined with specific features. The stages are designed to handle multiple sensor input, which results in a significant improvement. A detailed evaluation shows the benefits of our workflow, which includes hand-tailored features; even in comparison with classification approaches based on Convolutional Neural Networks, which are state of the art in computer vision, we could obtain a comparable or superior performance (F1 score of 0.96-0.94).
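
    To make the fusion idea in the second stage concrete, the sketch below trains a random-forest classifier on feature vectors that concatenate optical and elevation descriptors for candidate regions; the features and data are synthetic placeholders rather than the hand-tailored features used by the authors.

        # Toy second-stage classifier: concatenate optical and elevation features per candidate.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        n = 500
        optical = rng.normal(size=(n, 8))      # e.g. colour/edge statistics (placeholder)
        elevation = rng.normal(size=(n, 4))    # e.g. height above ground, roughness (placeholder)
        labels = (optical[:, 0] + elevation[:, 0] > 0.5).astype(int)  # synthetic "vehicle" label

        X = np.hstack([optical, elevation])    # fusion: one joint feature vector per candidate
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], labels[:400])

        print("held-out accuracy:", clf.score(X[400:], labels[400:]))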

  12. A data-independent acquisition workflow for qualitative screening of new psychoactive substances in biological samples.

    PubMed

    Kinyua, Juliet; Negreira, Noelia; Ibáñez, María; Bijlsma, Lubertus; Hernández, Félix; Covaci, Adrian; van Nuijs, Alexander L N

    2015-11-01

    Identification of new psychoactive substances (NPS) is challenging. Developing targeted methods for their analysis can be difficult and costly due to their impermanence on the drug scene. Accurate-mass mass spectrometry (AMMS) using a quadrupole time-of-flight (QTOF) analyzer can be useful for wide-scope screening since it provides sensitive, full-spectrum MS data. Our article presents a qualitative screening workflow based on data-independent acquisition mode (all-ions MS/MS) on liquid chromatography (LC) coupled to QTOFMS for the detection and identification of NPS in biological matrices. The workflow combines the fundamentals of target and suspect screening data processing techniques in a structured algorithm. This allows the detection and tentative identification of NPS and their metabolites. We have applied the workflow to two actual case studies involving drug intoxications where we detected and confirmed the parent compounds ketamine, 25B-NBOMe, 25C-NBOMe, and several predicted phase I and II metabolites not previously reported in urine and serum samples. The screening workflow demonstrates its added value for the detection and identification of NPS in biological matrices.
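
    The core of the suspect-screening step described above is matching measured accurate masses against a suspect list within a ppm tolerance; the sketch below shows that matching logic only, with a tiny suspect list and measured masses that are illustrative values, not data from the case studies.

        # Minimal accurate-mass suspect matching at a ppm tolerance (illustrative values only).
        PPM_TOL = 5.0

        suspects = {                      # name -> approximate [M+H]+ m/z (illustrative)
            "ketamine": 238.0993,
            "25B-NBOMe": 380.0856,
        }
        measured = [238.0990, 380.0870, 152.0706]

        def within_ppm(measured_mz, reference_mz, tol_ppm=PPM_TOL):
            return abs(measured_mz - reference_mz) / reference_mz * 1e6 <= tol_ppm

        for mz in measured:
            hits = [name for name, ref in suspects.items() if within_ppm(mz, ref)]
            print(f"m/z {mz:.4f}: {hits or 'no suspect match'}")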

  13. Structuring research methods and data with the research object model: genomics workflows as a case study.

    PubMed

    Hettne, Kristina M; Dharuri, Harish; Zhao, Jun; Wolstencroft, Katherine; Belhajjame, Khalid; Soiland-Reyes, Stian; Mina, Eleni; Thompson, Mark; Cruickshank, Don; Verdes-Montenegro, Lourdes; Garrido, Julian; de Roure, David; Corcho, Oscar; Klyne, Graham; van Schouwen, Reinout; 't Hoen, Peter A C; Bechhofer, Sean; Goble, Carole; Roos, Marco

    2014-01-01

    One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide necessary meta-data for a scientist to understand and recreate the results of an experiment. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study where we analysed human metabolite variation by workflows. We present the application of the workflow-centric RO model for our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as "which particular data was input to a particular workflow to test a particular hypothesis?", and "which particular conclusions were drawn from a particular workflow?". Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well. The Research Object is available at http://www.myexperiment.org/packs/428, and the Wf4Ever Research Object Model is available at http://wf4ever.github.io/ro.
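
    The kind of query quoted above ("which particular data was input to a particular workflow to test a particular hypothesis?") can be illustrated with a small RDF graph and SPARQL; the sketch below uses rdflib with an invented placeholder vocabulary, not the actual Wf4Ever RO ontology terms.

        # Sketch: query an RO-style aggregation with rdflib (placeholder vocabulary, not the RO ontology).
        import rdflib

        TTL = """
        @prefix ex: <http://example.org/ro#> .
        ex:workflow1 ex:hadInput ex:genotype_data ;
                     ex:testedHypothesis ex:hypothesis_A ;
                     ex:producedConclusion ex:conclusion_1 .
        """

        g = rdflib.Graph()
        g.parse(data=TTL, format="turtle")

        q = """
        PREFIX ex: <http://example.org/ro#>
        SELECT ?data WHERE {
          ?wf ex:testedHypothesis ex:hypothesis_A ;
              ex:hadInput ?data .
        }
        """
        for row in g.query(q):
            print("input data for hypothesis_A:", row.data)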

  14. From the desktop to the grid: scalable bioinformatics via workflow conversion.

    PubMed

    de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver

    2016-03-12

    Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community; therefore, each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free - an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.

  15. Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks

    PubMed Central

    Bleser, Gabriele; Damen, Dima; Behera, Ardhendu; Hendeby, Gustaf; Mura, Katharina; Miezal, Markus; Gee, Andrew; Petersen, Nils; Maçães, Gustavo; Domingues, Hugo; Gorecky, Dominic; Almeida, Luis; Mayol-Cuevas, Walterio; Calway, Andrew; Cohn, Anthony G.; Hogg, David C.; Stricker, Didier

    2015-01-01

    Today, the workflows that are involved in industrial assembly and production activities are becoming increasingly complex. To efficiently and safely perform these workflows is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user’s pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited size datasets indicate and highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system. PMID:26126116

  16. Widening the adoption of workflows to include human and human-machine scientific processes

    NASA Astrophysics Data System (ADS)

    Salayandia, L.; Pinheiro da Silva, P.; Gates, A. Q.

    2010-12-01

    Scientific workflows capture knowledge in the form of technical recipes to access and manipulate data that help scientists manage and reuse established expertise to conduct their work. Libraries of scientific workflows are being created in particular fields, e.g., Bioinformatics, which, combined with cyber-infrastructure environments that provide on-demand access to data and tools, result in powerful workbenches for scientists of those communities. The focus in these particular fields, however, has been more on automating rather than documenting scientific processes. As a result, technical barriers have impeded a wider adoption of scientific workflows by scientific communities that do not rely as heavily on cyber-infrastructure and computing environments. Semantic Abstract Workflows (SAWs) are introduced to widen the applicability of workflows as a tool to document scientific recipes or processes. SAWs intend to capture a scientist's perspective on how she or he would collect, filter, curate, and manipulate data to create the artifacts that are relevant to her/his work. In contrast, scientific workflows describe the process from the point of view of how technical methods and tools are used to conduct the work. By focusing on a higher level of abstraction that is closer to a scientist's understanding, SAWs effectively capture the controlled vocabularies that reflect a particular scientific community, as well as the types of datasets and methods used in a particular domain. From there on, SAWs provide the flexibility to adapt to different environments to carry out the recipes or processes. These environments range from manual fieldwork to highly technical cyber-infrastructure environments, such as those already supported by scientific workflows. Two cases, one from Environmental Science and another from Geophysics, are presented as illustrative examples.

  17. Lessons from Implementing a Combined Workflow–Informatics System for Diabetes Management

    PubMed Central

    Zai, Adrian H.; Grant, Richard W.; Estey, Greg; Lester, William T.; Andrews, Carl T.; Yee, Ronnie; Mort, Elizabeth; Chueh, Henry C.

    2008-01-01

    Shortcomings surrounding the care of patients with diabetes have been attributed largely to a fragmented, disorganized, and duplicative health care system that focuses more on acute conditions and complications than on managing chronic disease. To address these shortcomings, we developed a diabetes registry population management application to change the way our staff manages patients with diabetes. Use of this new application has helped us coordinate the responsibilities for intervening and monitoring patients in the registry among different users. Our experiences using this combined workflow-informatics intervention system suggest that integrating a chronic disease registry into clinical workflow for the treatment of chronic conditions creates a useful and efficient tool for managing disease. PMID:18436907

  18. From Provenance Standards and Tools to Queries and Actionable Provenance

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R or noWorkflow for Python, capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
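
    The sketch below illustrates the style of inline YesWorkflow annotations described above, applied to a trivial Python script; the script itself and the file names are invented, and the @begin/@in/@out/@end tags follow the YW annotation convention as commonly documented, so treat the exact tag usage as an assumption.

        # @begin clean_temperature_data
        # @in  raw_file  @uri file:raw_temps.csv
        # @out clean_file @uri file:clean_temps.csv
        import csv

        def clean_temperature_data(raw_file="raw_temps.csv", clean_file="clean_temps.csv"):
            """Drop obviously invalid readings; YW reads only the comments above and below."""
            with open(raw_file) as src, open(clean_file, "w", newline="") as dst:
                writer = csv.writer(dst)
                for row in csv.reader(src):
                    try:
                        if -90.0 <= float(row[1]) <= 60.0:   # keep plausible Celsius values
                            writer.writerow(row)
                    except (IndexError, ValueError):
                        continue                              # skip malformed rows
        # @end clean_temperature_data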

  19. Conventions and workflows for using Situs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wriggers, Willy, E-mail: wriggers@biomachina.org

    2012-04-01

    Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed.

  20. Towards an intelligent hospital environment: OR of the future.

    PubMed

    Sutherland, Jeffrey V; van den Heuvel, Willem-Jan; Ganous, Tim; Burton, Matthew M; Kumar, Animesh

    2005-01-01

    Patients, providers, payers, and government demand more effective and efficient healthcare services, and the healthcare industry needs innovative ways to re-invent core processes. Business process reengineering (BPR) showed that adopting new hospital information systems can leverage this transformation and that workflow management technologies can automate process management. Our research indicates that workflow technologies in healthcare require real-time patient monitoring, detection of adverse events, and adaptive responses to breakdowns in normal processes. Adaptive workflow systems are rarely implemented, making current workflow implementations inappropriate for healthcare. The advent of evidence-based medicine, guideline-based practice, and better understanding of cognitive workflow combined with novel technologies including Radio Frequency Identification (RFID), mobile/wireless technologies, internet workflow, intelligent agents, and Service Oriented Architectures (SOA) opens up new and exciting ways of automating business processes. Total situational awareness of events, timing, and location of healthcare activities can generate self-organizing change in behaviors of humans and machines. A test bed of a novel approach towards continuous process management was designed for the new Weinburg Surgery Building at the University of Maryland Medical Center. Early results based on clinical process mapping and analysis of patient flow bottlenecks demonstrated 100% improvement in delivery of supplies and instruments at surgery start time. This work has been directly applied to the design of the DARPA Trauma Pod research program where robotic surgery will be performed on wounded soldiers on the battlefield.

  1. Precise Spatiotemporal Control of Optogenetic Activation Using an Acousto-Optic Device

    PubMed Central

    Guo, Yanmeng; Song, Peipei; Zhang, Xiaohui; Zeng, Shaoqun; Wang, Zuoren

    2011-01-01

    Light activation and inactivation of neurons by optogenetic techniques has emerged as an important tool for studying neural circuit function. To achieve a high resolution, new methods are being developed to selectively manipulate the activity of individual neurons. Here, we report that the combination of an acousto-optic device (AOD) and single-photon laser was used to achieve rapid and precise spatiotemporal control of light stimulation at multiple points in a neural circuit with millisecond time resolution. The performance of this system in activating ChIEF expressed on HEK 293 cells as well as cultured neurons was first evaluated, and the laser stimulation patterns were optimized. Next, the spatiotemporally selective manipulation of multiple neurons was achieved in a precise manner. Finally, we demonstrated the versatility of this high-resolution method in dissecting neural circuits both in the mouse cortical slice and the Drosophila brain in vivo. Taken together, our results show that the combination of AOD-assisted laser stimulation and optogenetic tools provides a flexible solution for manipulating neuronal activity at high efficiency and with high temporal precision. PMID:22174813

  2. A novel bicomponent hemolysin from Bacillus cereus.

    PubMed Central

    Beecher, D J; MacMillan, J D

    1990-01-01

    A procedure combining isoelectric focusing (Sephadex IEF) and fast protein liquid chromatography (Superose 12; Mono-Q) removed hemolytic activity (presumably a contaminant) from partially purified preparations of the multicomponent diarrheal enterotoxin produced by Bacillus cereus. However, when the separated fractions were recombined, hemolytic activity was restored, suggesting that hemolysis is a property of the enterotoxin components. Combined fractions exhibited a unique ring pattern in gel diffusion assays in blood agar. During diffusion of the hemolysin from an agar well, the erythrocytes closest to the well were not lysed initially. After diffusion, hemolysis was observed as a sharp ring beginning several millimeters away from the edge of the well. With time the cells closer to the well were also lysed. This novel hemolysin consists of a protein (component B) which binds to or alters cells, allowing subsequent lysis by a second protein (component L). Sodium dodecyl sulfate-polyacrylamide gel electrophoresis, isoelectric focusing, and Western blot analysis showed that hemolysin BL has properties similar to those described previously for the enterotoxin and that both components are distinct from cereolysin and cereolysin AB. PMID:2114359

  3. Diffusive transfer to membranes as an effective interface between gel electrophoresis and mass spectrometry

    NASA Astrophysics Data System (ADS)

    Ogorzalek Loo, Rachel R.; Mitchell, Charles; Stevenson, Tracy I.; Loo, Joseph A.; Andrews, Philip C.

    1997-12-01

    Diffusive transfer was examined as a blotting method to transfer proteins from polyacrylamide gels to membranes for ultraviolet matrix-assisted laser desorption ionization (MALDI) mass spectrometry. The method is well-suited for transfers from isoelectric focusing (IEF) gels. Spectra have been obtained for 11 pmol of 66 kDa albumin loaded onto an IEF gel and subsequently blotted to polyethylene. Similarly, masses of intact carbonic anhydrase and hemoglobin were obtained from 14 and 20 pmol loadings. This methodology is also compatible with blotting high molecular weight proteins, as seen for 6 pmol of the 150 kDa monoclonal antibody anti-β-galactosidase transferred to Goretex. Polypropylene, Teflon, Nafion and polyvinylidene difluoride (PVDF) also produced good spectra following diffusive transfer. Only analysis from PVDF required that the membrane be kept wet prior to application of matrix. Considerations in mass accuracy for analysis from large-area membranes with continuous extraction and delayed extraction were explored, as were remedies for surface charging. Vapor phase CNBr cleavage was applied to membrane-bound samples for peptide mapping.

  4. Functional expression of a cattle MHC class II DR-like antigen on mouse L cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraser, D.C.; Craigmile, S.; Campbell, J.D.M.

    1996-09-01

    Cattle DRA and DRB genes, cloned by reverse-transcription polymerase chain reaction, were transfected into mouse L cells. The cattle DR-expressing L-cell transfectant generated was analyzed serologically, biochemically, and functionally. Sequence analysis of the transfected DRB gene clearly showed that it was DRB3 allele DRB3*0101, which corresponds to the 1D-IEF-determined allele DRBF3. 1D-IEF analysis of the transfectant confirmed that the expressed DR product was DRBF3. Functional integrity of the transfected gene products was demonstrated by the ability of the transfectant cell line to present two antigens (the foot-and-mouth disease virus-derived peptide FMDV15 and ovalbumin) to antigen-specific CD4+ T cells from both the original animal used to obtain the genes and from an unrelated DRBF3+ heterozygous animal. Such transfectants will be invaluable tools, allowing us to dissect the precise contributions each locus product makes to the overall immune response in heterozygous animals, information essential for rational vaccine design. 45 refs., 5 figs., 1 tab.

  5. Identification of Psilocybe cubensis spore allergens by immunoprinting.

    PubMed

    Helbling, A; Horner, W E; Lehrer, S B

    1993-01-01

    Previous studies established that Psilocybe cubensis contains potent allergens, and that a significant percentage of atopic subjects were sensitized to P. cubensis spores. The objective of this study was to identify P. cubensis spore allergens using isoelectric focusing (IEF) and sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) immunoprinting. Coomassie blue staining of IEF gels detected approximately 20 bands between pI 3.6 and 9.3. Immunoprints obtained with 15 P. cubensis skin test- and RAST-positive sera revealed 13 IgE-binding bands; the most reactive were at pI 5.0 (80%), 5.6 (87%), 8.7 (80%) and 9.3 (100%). SDS-PAGE resolved 27 proteins ranging from about 13 to 112 kD. SDS-PAGE immunoprints conducted with 11 skin test- and RAST-positive sera demonstrated 18 IgE-binding bands; most sera reacted to 16 (82%), 35 (100%) and 76 kD (91%) allergens. Both electrophoretic procedures demonstrated a single allergen (at pI 9.3 and 35 kD) that reacted with all sera tested. This study corroborates the allergenic significance of P. cubensis spores and identifies the allergens of greatest importance.

  6. Asterism: an integrated, complete, and open-source approach for running seismologist continuous data-intensive analysis on heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.

  7. Diagnostic applications of newborn screening for α-thalassaemias, haemoglobins E and H disorders using isoelectric focusing on dry blood spots.

    PubMed

    Jindatanmanusan, Punyanuch; Riolueang, Suchada; Glomglao, Waraporn; Sukontharangsri, Yaowapa; Chamnanvanakij, Sangkae; Torcharus, Kitti; Viprakasit, Vip

    2014-03-01

    Neonatal screening for haemoglobin (Hb) disorders is a standard of care in several developed countries with the main objective of detecting Hb S. Such practice has not been established in Thailand, where α-thalassaemia and haemoglobin E (Hb E) are highly prevalent. Early identification of thalassaemias could be helpful and strengthen the programme for prevention and control of severe thalassaemias. Data from isoelectric focusing (IEF) and Isoscan® for detecting types and amount (%) of each haemoglobin in 350 newborns' dried blood spots were analysed and compared with the comprehensive genotype analysis by DNA studies as a gold standard. Based on genetic profiles, there were 10 different categories: (1) normal (n = 227), (2) α(+)-thalassaemia trait (n = 14), (3) α(0)-thalassaemia trait (n = 13), (4) β(0)-thalassaemia trait (n = 7), (5) Hb E trait (n = 72), (6) Hb E trait with α(0)-thalassaemia or homozygous α(+)-thalassaemia (n = 5), (7) Hb E trait with α(+)-thalassaemia trait (n = 5), (8) homozygous Hb E (n = 3), (9) homozygous Hb E with α(0)-thalassaemia trait (n = 1) and (10) Hb H disease (n = 3). The presence of Hb Bart's and Hb E was used to identify cases with α-thalassaemia and Hb E, respectively. We set 0.25% Hb Bart's and 1.5% Hb E as cut-off levels to detect α(+)-thalassaemia trait (sensitivity 92.86% and specificity 74.0%) and Hb E trait (100% sensitivity and specificity) for IEF diagnosis. Although molecular diagnosis remains preferable for definitive diagnosis of thalassaemia syndromes at birth, using the reference ranges described herein, IEF can be applied in a resource-limited setting with acceptable reliability.
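
    The cut-off levels reported above translate directly into a simple screening rule; the sketch below encodes those two thresholds (0.25% Hb Bart's for α(+)-thalassaemia trait, 1.5% Hb E for Hb E trait) as a toy classifier, purely to illustrate how the IEF percentages would be applied, and is not a validated diagnostic tool.

        # Toy screening rule using the cut-offs reported in the abstract (not for clinical use).
        HB_BARTS_CUTOFF = 0.25   # % Hb Bart's suggesting alpha(+)-thalassaemia trait
        HB_E_CUTOFF = 1.5        # % Hb E suggesting Hb E trait

        def screen_newborn(hb_barts_pct, hb_e_pct):
            flags = []
            if hb_barts_pct >= HB_BARTS_CUTOFF:
                flags.append("possible alpha-thalassaemia (Hb Bart's present)")
            if hb_e_pct >= HB_E_CUTOFF:
                flags.append("possible Hb E trait")
            return flags or ["no thalassaemia flag at IEF screening"]

        print(screen_newborn(hb_barts_pct=0.4, hb_e_pct=0.2))
        print(screen_newborn(hb_barts_pct=0.0, hb_e_pct=3.1))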

  8. Performing statistical analyses on quantitative data in Taverna workflows: an example using R and maxdBrowse to identify differentially-expressed genes from microarray data.

    PubMed

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-08-07

    There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data by combining the use of scripts written in the R programming language with tools exposed as services in workflows. When these workflows are shared with colleagues and the wider scientific community, they provide an approach for other scientists wanting to use tools such as R without having to learn the corresponding programming language to analyse their own data.
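
    The same idea of delegating the statistics to R from a host-language workflow can be sketched outside Taverna; the example below uses rpy2 to embed R directly, rather than the RShell processor/Rserve route described above, and the toy expression values are placeholders.

```python
# Minimal sketch: hand two groups of expression values to R's t.test and pull the
# p-value back into Python. Assumes R and rpy2 are installed.
from rpy2 import robjects

control = robjects.FloatVector([7.1, 6.8, 7.4, 7.0])
treated = robjects.FloatVector([9.2, 8.9, 9.6, 9.1])

t_test = robjects.r["t.test"]          # look up R's t.test function
result = t_test(treated, control)
p_value = result.rx2("p.value")[0]     # extract the p-value from the R result list

print(f"differentially expressed: {p_value < 0.05} (p = {p_value:.4g})")
```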

  9. Performing statistical analyses on quantitative data in Taverna workflows: An example using R and maxdBrowse to identify differentially-expressed genes from microarray data

    PubMed Central

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-01-01

    Background There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Results Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Conclusion Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data by combining the use of scripts written in the R programming language with tools exposed as services in workflows. When these workflows are shared with colleagues and the wider scientific community, they provide an approach for other scientists wanting to use tools such as R without having to learn the corresponding programming language to analyse their own data. PMID:18687127

  10. CSF free light chain identification of demyelinating disease: comparison with oligoclonal banding and other CSF indexes.

    PubMed

    Gurtner, Kari M; Shosha, Eslam; Bryant, Sandra C; Andreguetto, Bruna D; Murray, David L; Pittock, Sean J; Willrich, Maria Alice V

    2018-02-19

    Cerebrospinal fluid (CSF) immunoglobulin gamma (IgG) index testing and oligoclonal bands (OCBs) are common laboratory tests used in the diagnosis of multiple sclerosis. The measurement of CSF free light chains (FLC) could serve as an alternative to the labor-intensive isoelectric-focusing (IEF) gels used for OCBs. A total of 325 residual paired CSF and serum specimens were obtained after physician-ordered OCB IEF testing. CSF and serum kappa FLC (cKFLC, sKFLC), CSF lambda FLC (cLFLC), albumin and total IgG were measured. Calculations were performed based on combinations of analytes: CSF sum of kappa and lambda ([cKFLC+cLFLC]), kappa-index (K-index) ([cKFLC/sKFLC]/[CSF albumin/serum albumin]), kappa intrathecal fraction (KFLCIF) {([cKFLC/sKFLC]-0.9358×[CSF albumin/serum albumin]^0.6687)×[sKFLC/cKFLC]} and IgG-index ([CSF IgG/CSF albumin]/[serum IgG/serum albumin]). Patients were categorized as: demyelination (n=67), autoimmunity (n=53), non-inflammatory (n=50), inflammation (n=38), degeneration (n=28), peripheral neuropathy (n=24), infection (n=13), cancer (n=11), neuromyelitis optica (n=10) and others (n=31). cKFLC measurement used alone at a cutoff of 0.0611 mg/dL showed >90% agreement with OCBs, a performance similar to or better than all other calculations, while reducing the number of analytes and variables. When cases of demyelinating disease were reviewed, cKFLC measurements showed 86% clinical sensitivity/77% specificity. cKFLC alone demonstrates comparable performance to OCBs along with increased sensitivity for demyelinating diseases. Replacing OCBs with cKFLC would alleviate the need for serum and CSF IgG and albumin measurements and the associated calculations. cKFLC can overcome challenges associated with the performance, interpretation, and cost of traditional OCBs, reducing costs while maintaining the sensitivity and specificity that support MS diagnosis.
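
    A worked sketch of the calculations listed above (illustrative, not the authors' code; the input values are toy numbers with FLC and albumin in the same units for CSF and serum, and the 0.0611 mg/dL cKFLC cut-off is the one reported in the abstract):

```python
# Compute the FLC-derived quantities described in the record from paired CSF/serum values.
def flc_indices(cKFLC, sKFLC, cLFLC, csf_alb, ser_alb, csf_IgG, ser_IgG):
    Q_alb = csf_alb / ser_alb
    k_index = (cKFLC / sKFLC) / Q_alb                        # kappa (K-)index
    IgG_index = (csf_IgG / csf_alb) / (ser_IgG / ser_alb)    # IgG index
    Q_lim = 0.9358 * Q_alb ** 0.6687                         # upper reference limit for Q(KFLC), per the KFLCIF formula above
    kflc_if = (cKFLC / sKFLC - Q_lim) * (sKFLC / cKFLC)      # kappa intrathecal fraction
    return {
        "cKFLC_positive": cKFLC >= 0.0611,                   # cut-off from the abstract, mg/dL
        "sum_cKFLC_cLFLC": cKFLC + cLFLC,
        "K_index": k_index,
        "KFLC_IF": kflc_if,
        "IgG_index": IgG_index,
    }

print(flc_indices(cKFLC=0.3, sKFLC=1.5, cLFLC=0.05,
                  csf_alb=25.0, ser_alb=4000.0, csf_IgG=4.0, ser_IgG=1100.0))
```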

  11. Protein-protein interactions during high-moisture extrusion for fibrous meat analogues and comparison of protein solubility methods using different solvent systems.

    PubMed

    Liu, KeShun; Hsieh, Fu-Hung

    2008-04-23

    Soy protein, mixed with gluten and starch, was extruded into fibrous meat analogues under high-moisture and high-temperature conditions. The protein solubility of samples collected at different extruder zones and extrudates made with different moistures was determined by 11 extraction solutions consisting of 6 selective reagents and their combinations: phosphate salts, urea, DTT, thiourea, Triton X-100, and CHAPS. Protein solubility by most extractants showed decreasing patterns as the material passed through the extruder, but the solution containing all 6 reagents, known as isoelectric focusing (IEF) buffer, solubilized the highest levels and equal amounts of proteins in all samples, indicating that there are no other covalent bonds involved besides disulfide bonds. With regard to the relative importance of disulfide bonds versus non-covalent interactions, different conclusions could be drawn from protein solubility patterns, depending on the type of extraction system and the baseline used for comparison. These observations point out pitfalls and limitations of current protein solubility methodology and explain why controversy exists in the literature. Using the IEF buffer system with omission of one or more selective reagents is considered the appropriate methodology for protein solubility studies and is thus recommended. Results obtained with this system indicate that disulfide bonding plays a more important role than non-covalent bonds in not only holding the rigid structure of extrudates but also forming fibrous texture. The sharpest decrease in protein solubility occurred when the mix passed through the intermediate section of the extruder barrel, indicating formation of new disulfide bonds during the stage of dramatic increase in both temperature and moisture. After this stage, although the physical form of the product might undergo change and fiber formation might occur as it passed through the cooling die, the chemical nature of the product did not change significantly.

  12. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    PubMed

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., prepare, process, and evaluate samples in parallel), and miniaturization (i.e., reduce sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  13. Effects of Fe deficiency on the protein profile of Brassica napus phloem sap

    USDA-ARS?s Scientific Manuscript database

    The aim of this work was to study the effect of Fe deficiency on the protein profile of phloem sap exudates from Brassica napus using 2-DE (IEF-SDS PAGE). The experiment was repeated thrice and two technical replicates per treatment were done. Two hundred sixty-three spots were consistently detected...

  14. Spatiotemporal analysis of tropical disease research combining Europe PMC and affiliation mapping web services.

    PubMed

    Palmblad, Magnus; Torvik, Vetle I

    2017-01-01

    Tropical medicine appeared as a distinct sub-discipline in the late nineteenth century, during a period of rapid European colonial expansion in Africa and Asia. After a dramatic drop after World War II, research on tropical diseases has received more attention and research funding in the twenty-first century. We used Apache Taverna to integrate Europe PMC and MapAffil web services into a spatiotemporal analysis workflow that runs from a list of PubMed queries to a list of publication years and author affiliations geoparsed to latitudes and longitudes. The results could then be visualized in the Quantum Geographic Information System (QGIS). Our workflows automatically matched 253,277 affiliations to geographical coordinates for the first authors of 379,728 papers on tropical diseases in a single execution. The bibliometric analyses show how research output in tropical diseases follows major historical shifts in the twentieth century and the renewed interest in and funding for tropical disease research in the twenty-first century. They show the effects of disease outbreaks, WHO eradication programs, vaccine developments, wars, refugee migrations, and peace treaties. Literature search and geoparsing web services can be combined in scientific workflows performing a complete spatiotemporal bibliometric analysis of research in tropical medicine. The workflows and datasets are freely available and can be used to reproduce or refine the analyses and test specific hypotheses or look into particular diseases or geographic regions. This work exceeds all previously published bibliometric analyses on tropical diseases in both scale and spatiotemporal range.
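
    The pattern of chaining a literature-search web service to a geoparsing step can also be sketched outside Taverna; the example below uses Europe PMC's REST search endpoint, with a placeholder standing in for the MapAffil call, and the JSON field names ("pubYear", "affiliation") are assumptions about the result structure.

```python
# Sketch: query Europe PMC, pull publication year and first-author affiliation,
# then hand the affiliation string to a geoparsing step (placeholder here).
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_epmc(query, page_size=100):
    params = {"query": query, "format": "json", "resultType": "core", "pageSize": page_size}
    hits = requests.get(EPMC_SEARCH, params=params, timeout=30).json()
    return hits.get("resultList", {}).get("result", [])

def geoparse(affiliation):
    # Placeholder for the MapAffil call that maps an affiliation string to lat/lon;
    # the actual request format of that service is not reproduced here.
    raise NotImplementedError("plug in MapAffil or another geoparsing service here")

for rec in search_epmc("malaria AND vaccine", page_size=10):
    year = rec.get("pubYear")
    affiliation = rec.get("affiliation")   # first-author affiliation, when present
    if year and affiliation:
        print(year, affiliation)           # next step: geoparse(affiliation) -> (lat, lon)
```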

  15. GUEST EDITOR'S INTRODUCTION: Guest Editor's introduction

    NASA Astrophysics Data System (ADS)

    Chrysanthis, Panos K.

    1996-12-01

    Computer Science Department, University of Pittsburgh, Pittsburgh, PA 15260, USA This special issue focuses on current efforts to represent and support workflows that integrate information systems and human resources within a business or manufacturing enterprise. Workflows may also be viewed as an emerging computational paradigm for effective structuring of cooperative applications involving human users and access to diverse data types not necessarily maintained by traditional database management systems. A workflow is an automated organizational process (also called business process) which consists of a set of activities or tasks that need to be executed in a particular controlled order over a combination of heterogeneous database systems and legacy systems. Within workflows, tasks are performed cooperatively by either human or computational agents in accordance with their roles in the organizational hierarchy. The challenge in facilitating the implementation of workflows lies in developing efficient workflow management systems. A workflow management system (also called workflow server, workflow engine or workflow enactment system) provides the necessary interfaces for coordination and communication among human and computational agents to execute the tasks involved in a workflow and controls the execution orderings of tasks as well as the flow of data that these tasks manipulate. That is, the workflow management system is responsible for correctly and reliably supporting the specification, execution, and monitoring of workflows. The six papers selected (out of the twenty-seven submitted for this special issue of Distributed Systems Engineering) address different aspects of these three functional components of a workflow management system. In the first paper, `Correctness issues in workflow management', Kamath and Ramamritham discuss the important issue of correctness in workflow management that constitutes a prerequisite for the use of workflows in the automation of the critical organizational/business processes. In particular, this paper examines the issues of execution atomicity and failure atomicity, differentiating between correctness requirements of system failures and logical failures, and surveys techniques that can be used to ensure data consistency in workflow management systems. While the first paper is concerned with correctness assuming transactional workflows in which selective transactional properties are associated with individual tasks or the entire workflow, the second paper, `Scheduling workflows by enforcing intertask dependencies' by Attie et al, assumes that the tasks can be either transactions or other activities involving legacy systems. This second paper describes the modelling and specification of conditions involving events and dependencies among tasks within a workflow using temporal logic and finite state automata. It also presents a scheduling algorithm that enforces all stated dependencies by executing at any given time only those events that are allowed by all the dependency automata and in an order as specified by the dependencies. In any system with decentralized control, there is a need to effectively cope with the tension that exists between autonomy and consistency requirements. In `A three-level atomicity model for decentralized workflow management systems', Ben-Shaul and Heineman focus on the specific requirement of enforcing failure atomicity in decentralized, autonomous and interacting workflow management systems. 
Their paper describes a model in which each workflow manager must be able to specify the sequence of tasks that comprise an atomic unit for the purposes of correctness, and the degrees of local and global atomicity for the purpose of cooperation with other workflow managers. The paper also discusses a realization of this model in which treaties and summits provide an agreement mechanism, while underlying transaction managers are responsible for maintaining failure atomicity. The fourth and fifth papers are experience papers describing a workflow management system and a large scale workflow application, respectively. Schill and Mittasch, in `Workflow management systems on top of OSF DCE and OMG CORBA', describe a decentralized workflow management system and discuss its implementation using two standardized middleware platforms, namely, OSF DCE and OMG CORBA. The system supports a new approach to workflow management, introducing several new concepts such as data type management for integrating various types of data and quality of service for various services provided by servers. A problem common to both database applications and workflows is the handling of missing and incomplete information. This is particularly pervasive in an `electronic market' with a huge number of retail outlets producing and exchanging volumes of data, the application discussed in `Information flow in the DAMA project beyond database managers: information flow managers'. Motivated by the need for a method that allows a task to proceed in a timely manner if not all data produced by other tasks are available by its deadline, Russell et al propose an architectural framework and a language that can be used to detect, approximate and, later on, to adjust missing data if necessary. The final paper, `The evolution towards flexible workflow systems' by Nutt, is complementary to the other papers and is a survey of issues and of work related to both workflow and computer supported collaborative work (CSCW) areas. In particular, the paper provides a model and a categorization of the dimensions which workflow management and CSCW systems share. Besides summarizing the recent advancements towards efficient workflow management, the papers in this special issue suggest areas open to investigation and it is our hope that they will also provide the stimulus for further research and development in the area of workflow management systems.

  16. MassCascade: Visual Programming for LC-MS Data Processing in Metabolomics.

    PubMed

    Beisken, Stephan; Earll, Mark; Portwood, David; Seymour, Mark; Steinbeck, Christoph

    2014-04-01

    Liquid chromatography coupled to mass spectrometry (LC-MS) is commonly applied to investigate the small molecule complement of organisms. Several software tools are typically joined in custom pipelines to semi-automatically process and analyse the resulting data. General workflow environments like the Konstanz Information Miner (KNIME) offer the potential of an all-in-one solution to process LC-MS data by allowing easy integration of different tools and scripts. We describe MassCascade and its workflow plug-in for processing LC-MS data. The Java library integrates frequently used algorithms in a modular fashion, thus enabling it to serve as back-end for graphical front-ends. The functions available in MassCascade have been encapsulated in a plug-in for the workflow environment KNIME, allowing combined use with e.g. statistical workflow nodes from other providers and making the tool intuitive to use without knowledge of programming. The design of the software guarantees a high level of modularity where processing functions can be quickly replaced or concatenated. MassCascade is an open-source library for LC-MS data processing in metabolomics. It embraces the concept of visual programming through its KNIME plug-in, simplifying the process of building complex workflows. The library was validated using open data.

  17. Interacting with the National Database for Autism Research (NDAR) via the LONI Pipeline workflow environment.

    PubMed

    Torgerson, Carinna M; Quinn, Catherine; Dinov, Ivo; Liu, Zhizhong; Petrosyan, Petros; Pelphrey, Kevin; Haselgrove, Christian; Kennedy, David N; Toga, Arthur W; Van Horn, John Darrell

    2015-03-01

    Under the umbrella of the National Database for Clinical Trials (NDCT) related to mental illnesses, the National Database for Autism Research (NDAR) seeks to gather, curate, and make openly available neuroimaging data from NIH-funded studies of autism spectrum disorder (ASD). NDAR has recently made its database accessible through the LONI Pipeline workflow design and execution environment to enable large-scale analyses of cortical architecture and function via local, cluster, or "cloud"-based computing resources. This presents a unique opportunity to overcome many of the customary limitations to fostering biomedical neuroimaging as a science of discovery. Providing open access to primary neuroimaging data, workflow methods, and high-performance computing will increase uniformity in data collection protocols, encourage greater reliability of published data, results replication, and broaden the range of researchers now able to perform larger studies than ever before. To illustrate the use of NDAR and LONI Pipeline for performing several commonly performed neuroimaging processing steps and analyses, this paper presents example workflows useful for ASD neuroimaging researchers seeking to begin using this valuable combination of online data and computational resources. We discuss the utility of such database and workflow processing interactivity as a motivation for the sharing of additional primary data in ASD research and elsewhere.

  18. Quality Metadata Management for Geospatial Scientific Workflows: from Retrieving to Assessing with Online Tools

    NASA Astrophysics Data System (ADS)

    Leibovici, D. G.; Pourabdollah, A.; Jackson, M.

    2011-12-01

    Experts and decision-makers use or develop models to monitor global and local changes of the environment. Their activities require the combination of data and processing services in a flow of operations and spatial data computations: a geospatial scientific workflow. The seamless ability to generate, re-use and modify a geospatial scientific workflow is an important requirement but the quality of outcomes is equally important [1]. Metadata information attached to the data and processes, and particularly their quality, is essential to assess the reliability of the scientific model that represents a workflow [2]. Management tools dealing with qualitative and quantitative metadata measures of the quality associated with a workflow are therefore required by the modellers. To ensure interoperability, ISO and OGC standards [3] are to be adopted, allowing one, for example, to define metadata profiles and to retrieve them via web service interfaces. However, these standards need a few extensions for workflows, particularly in the context of geoprocess metadata. We propose to fill this gap (i) first through the provision of a metadata profile for the quality of processes, and (ii) through providing a framework, based on XPDL [4], to manage the quality information. Web Processing Services are used to implement a range of metadata analyses on the workflow in order to evaluate and present quality information at different levels of the workflow. This generates the metadata quality, stored in the XPDL file. The focus is (a) on the visual representations of the quality, summarizing the retrieved quality information either from the standardized metadata profiles of the components or from non-standard quality information, e.g., Web 2.0 information, and (b) on the estimated qualities of the outputs derived from meta-propagation of uncertainties (a principle that we have introduced [5]). An a priori validation of the decision-making that the workflow outputs will support is then provided using these meta-propagated qualities, obtained without running the workflow [6], together with visualizations on the workflow graph itself that point out where the workflow needs to be improved with better data or better processes. [1] Leibovici, DG, Hobona, G, Stock, K, Jackson, M (2009) Qualifying geospatial workflow models for adaptive controlled validity and accuracy. In: IEEE 17th GeoInformatics, 1-5 [2] Leibovici, DG, Pourabdollah, A (2010a) Workflow Uncertainty using a Metamodel Framework and Metadata for Data and Processes. OGC TC/PC Meetings, September 2010, Toulouse, France [3] OGC (2011) www.opengeospatial.org [4] XPDL (2008) Workflow Process Definition Interface - XML Process Definition Language. Workflow Management Coalition, Document WfMC-TC-1025, 2008 [5] Leibovici, DG, Pourabdollah, A, Jackson, M (2011) Meta-propagation of Uncertainties for Scientific Workflow Management in Interoperable Spatial Data Infrastructures. In: Proceedings of the European Geosciences Union (EGU2011), April 2011, Austria [6] Pourabdollah, A, Leibovici, DG, Jackson, M (2011) MetaPunT: an Open Source tool for Meta-Propagation of uncerTainties in Geospatial Processing. In: Proceedings of OSGIS2011, June 2011, Nottingham, UK

  19. Identification of drug metabolites in human plasma or serum integrating metabolite prediction, LC-HRMS and untargeted data processing.

    PubMed

    Jacobs, Peter L; Ridder, Lars; Ruijken, Marco; Rosing, Hilde; Jager, Nynke Gl; Beijnen, Jos H; Bas, Richard R; van Dongen, William D

    2013-09-01

    Comprehensive identification of human drug metabolites in first-in-man studies is crucial to avoid delays in later stages of drug development. We developed an efficient workflow for systematic identification of human metabolites in plasma or serum that combines metabolite prediction, high-resolution accurate-mass LC-MS and vendor-independent MS data processing. Retrospective evaluation of predictions for 14 (14)C-ADME studies published in the period 2007-January 2012 indicates that on average 90% of the major metabolites in human plasma can be identified by searching for accurate masses of predicted metabolites. Furthermore, the workflow can identify unexpected metabolites in the same processing run, by differential analysis of samples of drug-dosed subjects and (placebo-dosed, pre-dose or otherwise blank) control samples. To demonstrate the utility of the workflow, we applied it to identify tamoxifen metabolites in the serum of a breast cancer patient treated with tamoxifen. Previously published metabolites were confirmed in this study and additional metabolites were identified, two of which are discussed to illustrate the advantages of the workflow.
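
    The accurate-mass matching step at the core of such a workflow can be illustrated with a small sketch (not the published implementation); the tolerance and the tamoxifen-related m/z values below are only for illustration.

```python
# Match predicted metabolite masses against observed LC-HRMS peaks within a ppm tolerance.
PPM_TOL = 5.0

predicted = {
    "tamoxifen [M+H]+": 372.2322,
    "N-desmethyltamoxifen [M+H]+": 358.2165,
    "4-hydroxytamoxifen [M+H]+": 388.2271,
}
observed_peaks = [372.2319, 388.2280, 401.1956]   # m/z values from peak picking

def ppm_error(observed, theoretical):
    return (observed - theoretical) / theoretical * 1e6

for name, mz_theo in predicted.items():
    matches = [mz for mz in observed_peaks if abs(ppm_error(mz, mz_theo)) <= PPM_TOL]
    if matches:
        print(f"{name}: matched at m/z {matches[0]:.4f} "
              f"({ppm_error(matches[0], mz_theo):+.1f} ppm)")
```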

  20. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    PubMed Central

    Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.

    2017-01-01

    ABSTRACT The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312

  1. Executing SADI services in Galaxy.

    PubMed

    Aranguren, Mikel Egaña; González, Alejandro Rodríguez; Wilkinson, Mark D

    2014-01-01

    In recent years, Galaxy has become a popular workflow management system in bioinformatics, due to its ease of installation, use and extension. The availability of Semantic Web-oriented tools in Galaxy, however, is limited. This is also the case for Semantic Web Services such as those provided by the SADI project, i.e. services that consume and produce RDF. Here we present SADI-Galaxy, a tool generator that deploys selected SADI Services as typical Galaxy tools. Through SADI-Galaxy, any SADI-compliant service becomes a Galaxy tool that can participate in other outstanding features of Galaxy, such as data storage, history, workflow creation, and publication. Galaxy can also be used to execute and combine SADI services as it does with other Galaxy tools. Finally, we have semi-automated the packing and unpacking of data into RDF such that other Galaxy tools can easily be combined with SADI services, plugging the rich SADI Semantic Web Service environment into the popular Galaxy ecosystem. SADI-Galaxy bridges the gap between Galaxy, an easy-to-use but "static" workflow system with a wide user base, and SADI, a sophisticated, semantic, discovery-based framework for Web Services, thus benefiting both user communities.

  2. Establishment of the optimum two-dimensional electrophoresis system of ovine ovarian tissue.

    PubMed

    Jia, J L; Zhang, L P; Wu, J P; Wang, J; Ding, Q

    2014-08-26

    Lambing performance of sheep is the most important economic trait and is regarded as a critical factor affecting productivity in the sheep industry. The ovary plays a central role in this trait. To establish the optimum two-dimensional electrophoresis (2-DE) system for ovine ovarian tissue, the common protein extraction methods for animal tissue (trichloroacetic acid/acetone precipitation and direct schizolysis methods) were used to extract ovine ovarian protein, and 17-cm nonlinear immobilized pH 3-10 gradient strips were used for 2-DE. The sample handling, loading quantity of the protein sample, and isoelectric focusing (IEF) steps were manipulated and optimized in this study. The results indicate that the direct schizolysis III method, a 200-μg loading quantity of the protein sample, and IEF program II (20°C active hydration 14 h → 500 V 1 h → 1000 V 1 h → 1000-9000 V 6 h → 80,000 Vh → 500 V 24 h) are optimal for 2-DE analysis of ovine ovarian tissue. Therefore, a 2-DE system for ovine ovarian tissue proteomics was preliminarily established under the optimized conditions in this study; meanwhile, the conditions identified herein could provide a reference for ovarian sample preparation and 2-DE using tissues from other animals.

  3. a Standardized Approach to Topographic Data Processing and Workflow Management

    NASA Astrophysics Data System (ADS)

    Wheaton, J. M.; Bailey, P.; Glenn, N. F.; Hensleigh, J.; Hudak, A. T.; Shrestha, R.; Spaete, L.

    2013-12-01

    An ever-increasing list of options exists for collecting high-resolution topographic data, including airborne LIDAR, terrestrial laser scanners, bathymetric SONAR and structure-from-motion. An equally rich, arguably overwhelming, variety of tools exists with which to organize, quality control, filter, analyze and summarize these data. However, scientists are often left to cobble together their analysis as a series of ad hoc steps, often using custom scripts and one-time processes that are poorly documented and rarely shared with the community. Even when literature-cited software tools are used, the input and output parameters differ from tool to tool. These parameters are rarely archived and the steps performed are lost, making the analysis virtually impossible to replicate precisely. What is missing is a coherent, robust framework for combining reliable, well-documented topographic data-processing steps into a workflow that can be repeated and even shared with others. We have taken several popular topographic data processing tools - including point cloud filtering and decimation as well as DEM differencing - and defined a common protocol for passing inputs and outputs between them. This presentation describes a free, public online portal that enables scientists to create custom workflows for processing topographic data using a number of popular topographic processing tools. Users provide the inputs required for each tool and specify the sequence in which they want to combine them. This information is then stored for future reuse (and optionally sharing with others) before the user downloads a single package that contains all the input and output specifications together with the software tools themselves. The user then launches the included batch file that executes the workflow on their local computer against their topographic data. This ZCloudTools architecture helps standardize, automate and archive topographic data processing. It also represents a forum for discovering and sharing effective topographic processing workflows.
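
    A hypothetical sketch of the kind of reusable workflow specification such a portal could archive is shown below; the tool names and parameters are invented placeholders, not the actual ZCloudTools components.

```python
# Hypothetical workflow specification: an ordered list of tool steps with explicit
# parameters, saved alongside the data so the exact processing sequence can be
# re-run or shared. Tool names and arguments are placeholders.
import json

workflow = {
    "name": "dem_change_detection",
    "steps": [
        {"tool": "pointcloud_filter",   "args": {"input": "survey_2023.las", "output": "filtered.las", "noise_threshold": 2.0}},
        {"tool": "pointcloud_decimate", "args": {"input": "filtered.las", "output": "thinned.las", "spacing_m": 0.5}},
        {"tool": "dem_difference",      "args": {"dem_new": "thinned.las", "dem_old": "survey_2020.tif", "output": "dod.tif"}},
    ],
}

# Archive the specification so the run is repeatable and shareable.
with open("workflow.json", "w") as fh:
    json.dump(workflow, fh, indent=2)

# Replay: each step becomes one command-line invocation of the packaged tool.
for step in workflow["steps"]:
    cmd = [step["tool"]] + [f"--{k}={v}" for k, v in step["args"].items()]
    print("would run:", " ".join(cmd))   # a real driver would call subprocess.run(cmd, check=True)
```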

  4. Integration and Visualization of Translational Medicine Data for Better Understanding of Human Diseases

    PubMed Central

    Satagopam, Venkata; Gu, Wei; Eifes, Serge; Gawron, Piotr; Ostaszewski, Marek; Gebel, Stephan; Barbosa-Silva, Adriano; Balling, Rudi; Schneider, Reinhard

    2016-01-01

    Abstract Translational medicine is a domain turning results of basic life science research into new tools and methods in a clinical environment, for example, as new diagnostics or therapies. Nowadays, the process of translation is supported by large amounts of heterogeneous data ranging from medical data to a whole range of -omics data. This is not only a great opportunity but also a great challenge, as translational medicine big data is difficult to integrate and analyze, and requires the involvement of biomedical experts for the data processing. We show here that visualization and interoperable workflows, combining multiple complex steps, can address at least parts of the challenge. In this article, we present an integrated workflow for exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable the biomedical experts to get a comprehensive overview and control over separate steps of the workflow. The capabilities of tranSMART enable a flexible filtering of multidimensional integrated data sets to create subsets suitable for downstream processing. A Galaxy Server offers visually aided construction of analytical pipelines, with the use of existing or custom components. A MINERVA platform supports the exploration of health and disease-related mechanisms in a contextualized analytical visualization system. We demonstrate the utility of our workflow by illustrating its subsequent steps using an existing data set, for which we propose a filtering scheme, an analytical pipeline, and a corresponding visualization of analytical results. The workflow is available as a sandbox environment, where readers can work with the described setup themselves. Overall, our work shows how visualization and interfacing of big data processing services facilitate exploration, analysis, and interpretation of translational medicine data. PMID:27441714

  5. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  6. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  7. KDE Bioscience: platform for bioinformatics analysis workflows.

    PubMed

    Lu, Qiang; Hao, Pei; Curcin, Vasa; He, Weizhong; Li, Yuan-Yuan; Luo, Qing-Ming; Guo, Yi-Ke; Li, Yi-Xue

    2006-08-01

    Bioinformatics is a dynamic research area in which a large number of algorithms and programs have been developed rapidly and independently without much consideration so far of the need for standardization. The lack of such common standards combined with unfriendly interfaces makes it difficult for biologists to learn how to use these tools and to translate the data formats from one to another. Consequently, the construction of an integrative bioinformatics platform to facilitate biologists' research is an urgent and challenging task. KDE Bioscience is a Java-based software platform that collects a variety of bioinformatics tools and provides a workflow mechanism to integrate them. Nucleotide and protein sequences from local flat files, web sites, and relational databases can be entered, annotated, and aligned. Several home-made or 3rd-party viewers are built in to provide visualization of annotations or alignments. KDE Bioscience can also be deployed in client-server mode where simultaneous execution of the same workflow is supported for multiple users. Moreover, workflows can be published as web pages that can be executed from a web browser. The power of KDE Bioscience comes from the integrated algorithms and data sources. With its generic workflow mechanism, other novel calculations and simulations can be integrated to augment the current sequence analysis functions. Because of this flexible and extensible architecture, KDE Bioscience makes an ideal integrated informatics environment for future bioinformatics or systems biology research.

  8. Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology.

    PubMed

    Cock, Peter J A; Grüning, Björn A; Paszkiewicz, Konrad; Pritchard, Leighton

    2013-01-01

    The Galaxy Project offers the popular web browser-based platform Galaxy for running bioinformatics tools and constructing simple workflows. Here, we present a broad collection of additional Galaxy tools for large scale analysis of gene and protein sequences. The motivating research theme is the identification of specific genes of interest in a range of non-model organisms, and our central example is the identification and prediction of "effector" proteins produced by plant pathogens in order to manipulate their host plant. This functional annotation of a pathogen's predicted capacity for virulence is a key step in translating sequence data into potential applications in plant pathology. This collection includes novel tools, and widely-used third-party tools such as NCBI BLAST+ wrapped for use within Galaxy. Individual bioinformatics software tools are typically available separately as standalone packages, or in online browser-based form. The Galaxy framework enables the user to combine these and other tools to automate organism scale analyses as workflows, without demanding familiarity with command line tools and scripting. Workflows created using Galaxy can be saved and are reusable, so may be distributed within and between research groups, facilitating the construction of a set of standardised, reusable bioinformatic protocols. The Galaxy tools and workflows described in this manuscript are open source and freely available from the Galaxy Tool Shed (http://usegalaxy.org/toolshed or http://toolshed.g2.bx.psu.edu).

  9. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language with a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
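
    A minimal sketch of what "a few lines of code" can look like with Ursgal's UController, based on its documented usage (treated here as an assumption rather than verified against a particular Ursgal release); the file names, engine string and parameters are placeholders.

```python
# Sketch of composing a single search with Ursgal; chaining additional engines and a
# combined-FDR/combined-PEP step would follow the same pattern. Treat the exact
# keyword names as assumptions about the Ursgal API.
import ursgal

uc = ursgal.UController(
    profile="LTQ XL low res",                   # instrument profile
    params={"database": "target_decoy.fasta"},  # search database
)

result_csv = uc.search(
    input_file="sample_01.mzML",
    engine="omssa_2_1_9",
)
print("unified result written to", result_csv)
```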

  10. Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.

    PubMed

    Aranguren, Mikel Egaña; Wilkinson, Mark D

    2015-01-01

    Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.

  11. Multidimensional Interactive Radiology Report and Analysis: standardization of workflow and reporting for renal mass tracking and quantification

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Ma, Kevin; Yepes, Fernando; Nadamuni, Mridula; Nayyar, Megha; Liu, Brent; Duddalwar, Vinay; Lepore, Natasha

    2015-12-01

    A conventional radiology report primarily consists of a large amount of unstructured text, and lacks clear, concise, consistent and content-rich information. Hence, an area of unmet clinical need consists of developing better ways to communicate radiology findings and information specific to each patient. Here, we design a new workflow and reporting system that combines and integrates advances in engineering technology with those from the medical sciences, the Multidimensional Interactive Radiology Report and Analysis (MIRRA). Until recently, clinical standards have primarily relied on 2D images for the purpose of measurement, but with the advent of 3D processing, many of the manually measured metrics can be automated, leading to better reproducibility and less subjective measurement placement. Hence, we make use of this newly available 3D processing in our workflow. Our pipeline is used here to standardize the labeling, tracking, and quantifying of metrics for renal masses.
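
    As an illustration of the kind of measurement that 3D processing automates (this is not the MIRRA implementation), lesion volume and an equivalent spherical diameter can be derived directly from a binary 3D segmentation mask:

```python
# Compute volume and equivalent diameter of a segmented mass from a 3D boolean mask.
import numpy as np

def mass_metrics(mask, voxel_spacing_mm):
    """mask: 3D boolean array (True inside the segmented renal mass);
    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in millimetres."""
    voxel_volume = float(np.prod(voxel_spacing_mm))              # mm^3 per voxel
    volume_mm3 = float(mask.sum()) * voxel_volume
    eq_diameter_mm = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)   # diameter of a sphere of equal volume
    return {"volume_mm3": volume_mm3, "equivalent_diameter_mm": eq_diameter_mm}

# Toy example: a 20 x 20 x 20 voxel cube at 1 mm isotropic spacing (~8 cm^3).
toy_mask = np.zeros((64, 64, 64), dtype=bool)
toy_mask[20:40, 20:40, 20:40] = True
print(mass_metrics(toy_mask, (1.0, 1.0, 1.0)))
```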

  12. Towards An Understanding of Mobile Touch Navigation in a Stereoscopic Viewing Environment for 3D Data Exploration.

    PubMed

    López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias

    2016-05-01

    We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.

  13. Genetic analysis of circulating tumor cells in pancreatic cancer patients: A pilot study.

    PubMed

    Görner, Karin; Bachmann, Jeannine; Holzhauer, Claudia; Kirchner, Roland; Raba, Katharina; Fischer, Johannes C; Martignoni, Marc E; Schiemann, Matthias; Alunni-Fabbroni, Marianna

    2015-07-01

    Pancreatic cancer is one of the most aggressive malignant tumors, mainly due to aggressive metastatic spread. In recent years, circulating tumor cells have become associated with tumor metastasis. Little is known about their expression profiles. The aim of this study was to develop a complete workflow making it possible to isolate circulating tumor cells from patients with pancreatic cancer and to characterize them genetically. We show that the proposed workflow offers a technical sensitivity and specificity high enough to detect and isolate single tumor cells. Moreover, our approach makes it feasible to genetically characterize single CTCs. Our work discloses a complete workflow to detect, count and genetically analyze individual CTCs isolated from blood samples. This method has a central impact on the early detection of metastasis development. The combination of cell quantification and genetic analysis provides clinicians with a powerful tool not available so far. Copyright © 2015. Published by Elsevier Inc.

  14. A software-aided workflow for precinct-scale residential redevelopment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glackin, Stephen, E-mail: sglackin@swin.edu.au; Trubka, Roman, E-mail: r.trubka@gmail.com; Dionisio, Maria Rita, E-mail: rita.dionisio@canterbury.ac.nz

    2016-09-15

    Growing urban populations, combined with environmental challenges, have placed significant pressure on urban planning to supply housing while addressing policy issues such as sustainability, affordability, and liveability. The interrelated nature of these issues, combined with the requirement of evidence-based planning, has made decision-making so complex that urban planners need to combine expertise on energy, water, carbon emissions, transport and economic development along with other bodies of knowledge necessary to make well-informed decisions. This paper presents two geospatial software systems that can assist in the mediation of complexity, by allowing users to assess a variety of planning metrics without expert knowledge in those disciplines. Using Envision and Envision Scenario Planner (ESP), both products of the Greening the Greyfields research project funded by the Cooperative Research Centre for Spatial Information (CRCSI) in Australia, we demonstrate a workflow for identifying potential redevelopment precincts and designing and assessing possible redevelopment scenarios to optimise planning outcomes.

  15. Chairside Fabrication of an All-Ceramic Partial Crown Using a Zirconia-Reinforced Lithium Silicate Ceramic

    PubMed Central

    Pabel, Anne-Kathrin; Rödiger, Matthias

    2016-01-01

    The chairside fabrication of a monolithic partial crown using a zirconia-reinforced lithium silicate (ZLS) ceramic is described. The fully digitized model-free workflow in a dental practice is possible due to the use of a powder-free intraoral scanner and the computer-aided design/computer-assisted manufacturing (CAD/CAM) of the restorations. The innovative ZLS material offers a singular combination of fracture strength (>370 MPa), optimum polishing characteristics, and excellent optical properties. Therefore, this ceramic is an interesting alternative material for monolithic restorations produced in a digital workflow. PMID:27042362

  16. Formalizing an integrative, multidisciplinary cancer therapy discovery workflow

    PubMed Central

    McGuire, Mary F.; Enderling, Heiko; Wallace, Dorothy I.; Batra, Jaspreet; Jordan, Marie; Kumar, Sushil; Panetta, John C.; Pasquier, Eddy

    2014-01-01

    Although many clinicians and researchers work to understand cancer, there has been limited success to effectively combine forces and collaborate over time, distance, data and budget constraints. Here we present a workflow template for multidisciplinary cancer therapy that was developed during the 2nd Annual Workshop on Cancer Systems Biology sponsored by Tufts University, Boston, MA in July 2012. The template was applied to the development of a metronomic therapy backbone for neuroblastoma. Three primary groups were identified: clinicians, biologists, and scientists (mathematicians, computer scientists, physicists and engineers). The workflow described their integrative interactions; parallel or sequential processes; data sources and computational tools at different stages as well as the iterative nature of therapeutic development from clinical observations to in vitro, in vivo, and clinical trials. We found that theoreticians in dialog with experimentalists could develop calibrated and parameterized predictive models that inform and formalize sets of testable hypotheses, thus speeding up discovery and validation while reducing laboratory resources and costs. The developed template outlines an interdisciplinary collaboration workflow designed to systematically investigate the mechanistic underpinnings of a new therapy and validate that therapy to advance development and clinical acceptance. PMID:23955390

  17. A reliable computational workflow for the selection of optimal screening libraries.

    PubMed

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. Successes of such screening campaigns critically depend on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood-brain barrier permeation model was developed and validated (85 and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors that demonstrated the best performance in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software, yet, owing to its use of generic components, it can be easily adapted and reproduced by computational groups interested in rational selection of screening libraries. Furthermore, the workflow could be readily modified to include additional components. This workflow has been routinely used in our laboratory for the selection of libraries in multiple projects and consistently selects libraries which are well balanced across multiple parameters.
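
    An illustrative sketch of a consensus score of this kind is given below (not the authors' Pipeline Pilot implementation); the criteria names, weights and min-max normalisation are assumptions.

```python
# Combine per-criterion library assessments into a single consensus score for ranking.
def consensus_score(library_metrics, weights):
    """library_metrics: {library: {criterion: raw value, higher is better}};
    weights: {criterion: relative importance, summing to 1}."""
    criteria = list(weights)
    lo = {c: min(m[c] for m in library_metrics.values()) for c in criteria}
    hi = {c: max(m[c] for m in library_metrics.values()) for c in criteria}
    scores = {}
    for lib, m in library_metrics.items():
        # Min-max normalise each criterion across the candidate libraries, then weight.
        normed = [(m[c] - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.5 for c in criteria]
        scores[lib] = sum(w * n for w, n in zip(weights.values(), normed))
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

metrics = {
    "vendor_A": {"adme_pass_rate": 0.82, "non_promiscuous": 0.95, "diversity": 0.61},
    "vendor_B": {"adme_pass_rate": 0.74, "non_promiscuous": 0.90, "diversity": 0.78},
}
print(consensus_score(metrics, {"adme_pass_rate": 0.4, "non_promiscuous": 0.3, "diversity": 0.3}))
```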

  18. SU-E-T-419: Workflow and FMEA in a New Proton Therapy (PT) Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Wessels, B; Hamilton, H

    2014-06-01

    Purpose: Workflow is an important component in the operational planning of a new proton facility. By integrating the concept of failure mode and effects analysis (FMEA) with traditional QA requirements, a workflow for a proton therapy treatment course is set up. This workflow serves as the blueprint for the planning of computer hardware/software requirements and network flow. A slight modification of the workflow generates a process map (PM) for FMEA and the planning of the QA program in PT. Methods: A flowchart is first developed outlining the sequence of processes involved in a PT treatment course. Each process consists of a number of sub-processes to encompass a broad scope of treatment and QA procedures. For each sub-process, the personnel involved, the equipment needed and the computer hardware/software as well as network requirements are defined by a team of clinical staff, administrators and IT personnel. Results: Eleven intermediate processes with a total of 70 sub-processes involved in a PT treatment course are identified. The number of sub-processes varies, ranging from 2-12. The sub-processes within each process are used for the operational planning. For example, in the CT-Sim process, there are 12 sub-processes: three involve data entry/retrieval from a record-and-verify system, two are controlled by the CT computer, two require the department/hospital network, and the other five are setup procedures. IT then decides the number of computers needed and the software and network requirements. By removing the traditional QA procedures from the workflow, a PM is generated for FMEA analysis to design a QA program for PT. Conclusion: Significant effort is involved in the development of the workflow for a PT treatment course. Our hybrid model combining FMEA and a traditional QA program serves the dual purpose of efficient operational planning and design of a QA program in PT.

  19. Seamless online science workflow development and collaboration using IDL and the ENVI Services Engine

    NASA Astrophysics Data System (ADS)

    Harris, A. T.; Ramachandran, R.; Maskey, M.

    2013-12-01

    The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out of the box. Thus, as part of the NASA-funded project Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within the IDL Workbench. Such additional features are possible because the IDL Workbench is built using the Eclipse Rich Client Platform (RCP), and RCP applications allow custom plugins to be dropped in for extended functionality. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All of these functionalities are available to scientists without leaving their IDL Workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows can be executed in batch mode on large datasets in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone, modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.

  20. Global combined precursor isotopic labeling and isobaric tagging (cPILOT) approach with selective MS(3) acquisition.

    PubMed

    Evans, Adam R; Robinson, Renã A S

    2013-11-01

    Recently, we reported a novel proteomics quantitation scheme termed "combined precursor isotopic labeling and isobaric tagging (cPILOT)" that allows for the identification and quantitation of nitrated peptides in as many as 12-16 samples in a single experiment. cPILOT offers enhanced multiplexing and posttranslational modification specificity; however, it excludes global quantitation of all peptides present in a mixture and, similar to other isobaric tagging methods, underestimates reporter ion ratios due to precursor co-isolation. Here, we present a novel chemical workflow for cPILOT that can be used for global tagging of all peptides in a mixture. Specifically, through low-pH precursor dimethylation of tryptic or LysC peptides followed by high-pH tandem mass tagging, the same reporter ion can be used twice in a single experiment. Also, to improve triple-stage mass spectrometry (MS(3)) data acquisition, a selective MS(3) method that focuses on product selection of the y1 fragment of lysine-terminated peptides is incorporated into the workflow. This novel cPILOT workflow has potential for global peptide quantitation that could lead to enhanced sample multiplexing and increase the number of quantifiable spectra obtained from MS(3) acquisition methods. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. New hardware and workflows for semi-automated correlative cryo-fluorescence and cryo-electron microscopy/tomography.

    PubMed

    Schorb, Martin; Gaechter, Leander; Avinoam, Ori; Sieckmann, Frank; Clarke, Mairi; Bebeacua, Cecilia; Bykov, Yury S; Sonnen, Andreas F-P; Lihl, Reinhard; Briggs, John A G

    2017-02-01

    Correlative light and electron microscopy allows features of interest defined by fluorescence signals to be located in an electron micrograph of the same sample. Rare dynamic events or specific objects can be identified, targeted and imaged by electron microscopy or tomography. To combine it with structural studies using cryo-electron microscopy or tomography, fluorescence microscopy must be performed while maintaining the specimen vitrified at liquid-nitrogen temperatures and in a dry environment during imaging and transfer. Here we present instrumentation, software and an experimental workflow that improves the ease of use, throughput and performance of correlated cryo-fluorescence and cryo-electron microscopy. The new cryo-stage incorporates a specially modified high-numerical aperture objective lens and provides a stable and clean imaging environment. It is combined with a transfer shuttle for contamination-free loading of the specimen. Optimized microscope control software allows automated acquisition of the entire specimen area by cryo-fluorescence microscopy. The software also facilitates direct transfer of the fluorescence image and associated coordinates to the cryo-electron microscope for subsequent fluorescence-guided automated imaging. Here we describe these technological developments and present a detailed workflow, which we applied for automated cryo-electron microscopy and tomography of various specimens. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. A Flexible Workflow for Automated Bioluminescent Kinase Selectivity Profiling.

    PubMed

    Worzella, Tracy; Butzler, Matt; Hennek, Jacquelyn; Hanson, Seth; Simdon, Laura; Goueli, Said; Cowan, Cris; Zegzouti, Hicham

    2017-04-01

    Kinase profiling during drug discovery is a necessary process to confirm inhibitor selectivity and assess off-target activities. However, cost and logistical limitations prevent profiling activities from being performed in-house. We describe the development of an automated and flexible kinase profiling workflow that combines ready-to-use kinase enzymes and substrates in convenient eight-tube strips, a bench-top liquid handling device, ADP-Glo Kinase Assay (Promega, Madison, WI) technology to quantify enzyme activity, and a multimode detection instrument. Automated methods were developed for kinase reactions and quantification reactions to be assembled on a Gilson (Middleton, WI) PIPETMAX, following standardized plate layouts for single- and multidose compound profiling. Pipetting protocols were customized at runtime based on user-provided information, including compound number, increment for compound titrations, and number of kinase families to use. After the automated liquid handling procedures, a GloMax Discover (Promega) microplate reader preloaded with SMART protocols was used for luminescence detection and automatic data analysis. The functionality of the automated workflow was evaluated with several compound-kinase combinations in single-dose or dose-response profiling formats. Known target-specific inhibitions were confirmed. Novel small molecule-kinase interactions, including off-target inhibitions, were identified and confirmed in secondary studies. By adopting this streamlined profiling process, researchers can quickly and efficiently profile compounds of interest on site.

  3. MetaNET--a web-accessible interactive platform for biological metabolic network analysis.

    PubMed

    Narang, Pankaj; Khan, Shawez; Hemrom, Anmol Jaywant; Lynn, Andrew Michael

    2014-01-01

    Metabolic reactions have been extensively studied and compiled over the last century. These have provided a theoretical base for implementing models, simulations of which are used to identify drug targets and optimize metabolic throughput at a systemic level. While tools for the perturbation of metabolic networks are available, their applications are limited and restricted as they require varied dependencies and often a commercial platform for full functionality. We have developed MetaNET, an open-source, user-friendly, platform-independent and web-accessible resource consisting of several pre-defined workflows for metabolic network analysis. MetaNET is a web-accessible platform that incorporates a range of functions which can be combined to produce different simulations related to metabolic networks. These include (i) optimization of an objective function for the wild-type strain and gene/catalyst/reaction knock-out/knock-down analysis using flux balance analysis; (ii) flux variability analysis; (iii) chemical species participation; (iv) identification of cycles and extreme paths; and (v) choke-point reaction analysis to facilitate identification of potential drug targets. The platform is built using custom scripts along with the open-source Galaxy workflow system and the Systems Biology Research Tool as components. Pre-defined workflows are available for common processes, and an exhaustive list of over 50 functions is provided for user-defined workflows. MetaNET, available at http://metanet.osdd.net , provides a user-friendly, rich interface allowing the analysis of genome-scale metabolic networks under various genetic and environmental conditions. The framework permits the storage of previous results, the ability to repeat analyses and share results with other users over the internet, as well as the ability to run different tools simultaneously using pre-defined workflows and user-created custom workflows.
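    Flux balance analysis, the first function listed above, reduces to a linear program: maximize an objective flux subject to the steady-state mass balance S v = 0 and flux bounds. The toy sketch below is not MetaNET code; it is a two-metabolite, three-reaction network invented for illustration, solved with SciPy.

        # Toy flux balance analysis with SciPy's linear-programming solver.
        import numpy as np
        from scipy.optimize import linprog

        # Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1-R3).
        S = np.array([[1.0, -1.0,  0.0],   # R1 produces A, R2 consumes A
                      [0.0,  1.0, -1.0]])  # R2 produces B, R3 consumes B
        objective = np.array([0.0, 0.0, 1.0])      # maximize flux through R3 ("biomass")
        bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake reaction R1 capped at 10 units

        # linprog minimizes, so negate the objective; steady state requires S v = 0.
        res = linprog(-objective, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal biomass flux:", -res.fun)   # expected: 10.0
        print("flux distribution:", res.x)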

  4. Querying Provenance Information: Basic Notions and an Example from Paleoclimate Reconstruction

    NASA Astrophysics Data System (ADS)

    Stodden, V.; Ludaescher, B.; Bocinsky, K.; Kintigh, K.; Kohler, T.; McPhillips, T.; Rush, J.

    2016-12-01

    Computational models are used to reconstruct and explain past environments and to predict likely future environments. For example, Bocinsky and Kohler have performed a 2,000-year reconstruction of the rain-fed maize agricultural niche in the US Southwest. The resulting academic publications not only contain traditional method descriptions, figures, etc., but also links to code and data for basic transparency and reproducibility. Examples include ResearchCompendia.org and the new project "Merging Science and Cyberinfrastructure Pathways: The Whole Tale." Provenance information provides a further critical element for understanding a published study and possibly extending or challenging the findings of the original authors. We present different notions and uses of provenance information using a computational archaeology example, e.g., the common use of "provenance for others" (for transparency and reproducibility), but also the more elusive but equally important use of "provenance for self". To this end, we distinguish prospective provenance (a.k.a. the workflow) from retrospective provenance (a.k.a. data lineage) and show how combinations of both forms of provenance can be used to answer different kinds of important questions about a workflow and its execution. Since many workflows are developed using scripting or special-purpose languages such as Python and R, we employ an approach and toolkit called YesWorkflow that brings provenance modeling, capture, and querying into the realm of scripting. YesWorkflow employs the basic W3C PROV standard and the ProvONE extension for sharing and exchanging retrospective and prospective provenance information, respectively. Finally, we argue that the utility of provenance information should be maximized by developing different kinds of provenance questions and queries during the early phases of computational workflow design and implementation.
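    To make prospective provenance in scripts concrete, the sketch below annotates a Python script in the comment style that YesWorkflow parses (tag spellings follow the YesWorkflow documentation but should be treated as approximate; the step and file names are hypothetical and only loosely echo the maize-niche example). The tool reads only the comments, so the script's behavior is unchanged.

        # @begin reconstruct_maize_niche
        # @in station_records @uri file:data/precip_stations.csv
        # @out niche_map @uri file:results/niche_map.tif

        # @begin interpolate_precipitation
        # @in station_records
        # @out gridded_precip
        def interpolate_precipitation(station_records):
            ...  # placeholder: interpolate station data onto a grid
        # @end interpolate_precipitation

        # @begin classify_niche
        # @in gridded_precip
        # @out niche_map
        def classify_niche(gridded_precip):
            ...  # placeholder: threshold precipitation to delimit the rain-fed niche
        # @end classify_niche

        # @end reconstruct_maize_niche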

  5. Improved framework model to allocate optimal rainwater harvesting sites in small watersheds for agro-forestry uses

    NASA Astrophysics Data System (ADS)

    Terêncio, D. P. S.; Sanches Fernandes, L. F.; Cortes, R. M. V.; Pacheco, F. A. L.

    2017-07-01

    This study introduces an improved rainwater harvesting (RWH) suitability model to support the implementation of agro-forestry projects (irrigation, wildfire combat) in catchments. The model combines a planning workflow, which defines the suitability of catchments based on physical, socio-economic and ecologic variables, with an allocation workflow, which constrains suitable RWH sites as a function of project-specific features (e.g., distance from the rainfall collection area to the application area). The planning workflow comprises a Multi-Criteria Analysis (MCA) implemented on a Geographic Information System (GIS), whereas the allocation workflow is based on a multiple-parameter ranking analysis. Compared with similar models, the improvement lies in the flexible MCA weights and in the allocation workflow as a whole. The method is tested in a contaminated watershed (the Ave River basin) located in Portugal. The pilot project encompasses the irrigation of a 400 ha crop land that consumes 2.69 Mm3 of water per year. Using harvested water for irrigation replaces the use of stream water carrying excessive anthropogenic nutrients, which may raise nitrosamine levels in food and their accumulation in the food chain, with severe consequences for human health (cancer). The selected rainfall collection catchment is capable of harvesting 12 Mm3·yr-1 (≈ 4.5 × the requirement) and lies roughly 3 km from the application area, assuring crop irrigation by gravity flow with modest transport costs. The RWH structure is about 8 m high and can be built in earth at reduced cost.
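    A quick back-of-the-envelope check of the quoted figures (a sketch, not part of the authors' GIS model):

        # Compare annual harvestable volume with the reported irrigation requirement.
        harvestable_mm3_per_year = 12.0   # selected collection catchment (Mm3/yr)
        irrigation_demand_mm3 = 2.69      # 400 ha crop land, reported annual consumption (Mm3/yr)
        print(f"supply/demand ratio: {harvestable_mm3_per_year / irrigation_demand_mm3:.1f}x")
        # prints about 4.5x, consistent with the abstract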

  6. Galaxy tools and workflows for sequence analysis with applications in molecular plant pathology

    PubMed Central

    Grüning, Björn A.; Paszkiewicz, Konrad; Pritchard, Leighton

    2013-01-01

    The Galaxy Project offers the popular web browser-based platform Galaxy for running bioinformatics tools and constructing simple workflows. Here, we present a broad collection of additional Galaxy tools for large scale analysis of gene and protein sequences. The motivating research theme is the identification of specific genes of interest in a range of non-model organisms, and our central example is the identification and prediction of “effector” proteins produced by plant pathogens in order to manipulate their host plant. This functional annotation of a pathogen’s predicted capacity for virulence is a key step in translating sequence data into potential applications in plant pathology. This collection includes novel tools, and widely-used third-party tools such as NCBI BLAST+ wrapped for use within Galaxy. Individual bioinformatics software tools are typically available separately as standalone packages, or in online browser-based form. The Galaxy framework enables the user to combine these and other tools to automate organism scale analyses as workflows, without demanding familiarity with command line tools and scripting. Workflows created using Galaxy can be saved and are reusable, so may be distributed within and between research groups, facilitating the construction of a set of standardised, reusable bioinformatic protocols. The Galaxy tools and workflows described in this manuscript are open source and freely available from the Galaxy Tool Shed (http://usegalaxy.org/toolshed or http://toolshed.g2.bx.psu.edu). PMID:24109552

  7. Development of a High-Throughput Ion-Exchange Resin Characterization Workflow.

    PubMed

    Liu, Chun; Dermody, Daniel; Harris, Keith; Boomgaard, Thomas; Sweeney, Jeff; Gisch, Daryl; Goltz, Bob

    2017-06-12

    A novel high-throughput (HTR) ion-exchange (IEX) resin workflow has been developed for characterizing the ion-exchange equilibria of commercial and experimental IEX resins across a range of applications in which the water environment differs from site to site. Because of its much higher throughput, design-of-experiments (DOE) methodology can be easily applied to study the effects of multiple factors on resin performance. Two case studies are presented to illustrate the efficacy of the combined HTR workflow and DOE method. In case study one, a series of anion exchange resins was screened for selective removal of NO3- and NO2- in water environments consisting of multiple other anions, varied pH, and ionic strength. A response surface model (RSM) is developed to statistically correlate the resin performance with the water composition and predict the best resin candidate. In case study two, the same HTR workflow and DOE method were applied to screen different cation exchange resins for the selective removal of Mg2+, Ca2+, and Ba2+ from high total dissolved solids (TDS) water. A master DOE model including all of the cation exchange resins is created to predict divalent cation removal by different IEX resins under specific conditions, from which the best resin candidates can be identified. The successful adoption of the HTR workflow and DOE method for studying the ion exchange of IEX resins can significantly reduce the resources and time needed to address industry and application needs.
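    As a minimal sketch of the response-surface idea (hypothetical design points and removal values, not the study's data), a quadratic model of removal as a function of pH and ionic strength can be fitted and queried as follows, assuming scikit-learn is available.

        # Fit a quadratic response surface for nitrate removal vs. pH and ionic strength.
        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # Designed experiment: columns are pH and ionic strength (mol/L);
        # the response is percent NO3- removed (illustrative numbers only).
        X = np.array([[6.0, 0.01], [6.0, 0.10], [7.5, 0.05],
                      [9.0, 0.01], [9.0, 0.10], [7.5, 0.05]])
        y = np.array([78.0, 55.0, 70.0, 64.0, 41.0, 69.0])

        rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
        print("predicted removal at pH 7.0, I = 0.02 M:", rsm.predict([[7.0, 0.02]])[0])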

  8. SHIWA Services for Workflow Creation and Sharing in Hydrometeorology

    NASA Astrophysics Data System (ADS)

    Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely

    2014-05-01

    Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer, providing a user interface that researchers can use. There are many scientific workflow systems, but they are not interoperable. Learning a workflow system and creating workflows may require significant effort. Considering this effort, it is not reasonable to expect researchers to learn a new workflow system just to run workflows developed in another one. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability (CGI) concept. It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept by integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface, using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus the executables and data needed to execute them. It offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines, or submits the workflow engine together with the workflow to local or remote resources for execution. The SHIWA Proxy Server manages the certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) the project consortium to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows, and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.

  9. Summative Objective Structured Clinical Examination Assessment at the End of Anesthesia Residency for Perioperative Ultrasound.

    PubMed

    Mitchell, John D; Amir, Rabia; Montealegre-Gallegos, Mario; Mahmood, Feroze; Shnider, Marc; Mashari, Azad; Yeh, Lu; Bose, Ruma; Wong, Vanessa; Hess, Philip; Amador, Yannis; Jeganathan, Jelliffe; Jones, Stephanie B; Matyal, Robina

    2018-06-01

    While standardized examinations and data from simulators and phantom models can assess knowledge and manual skills for ultrasound, an Objective Structured Clinical Examination (OSCE) could assess workflow understanding. We recruited 8 experts to develop an OSCE to assess workflow understanding in perioperative ultrasound. The experts used a binary grading system to score 19 graduating anesthesia residents at 6 stations. Overall average performance was 86.2%, and 3 stations had an acceptable internal reliability (Kuder-Richardson formula 20 coefficient >0.5). After refinement, this OSCE can be combined with standardized examinations and data from simulators and phantom models to assess proficiency in ultrasound.
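    For reference (not stated in the abstract), the internal-reliability statistic cited above is, in its standard form, the Kuder-Richardson formula 20 for a K-item, binary-scored station:

        % KR-20: p_i is the proportion of examinees scoring item i correct,
        % q_i = 1 - p_i, and sigma_X^2 is the variance of the total scores.
        \[
          \mathrm{KR20} = \frac{K}{K-1}\left(1 - \frac{\sum_{i=1}^{K} p_i q_i}{\sigma_X^{2}}\right)
        \]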

  10. From SFM to 3d Print: Automated Workflow Addressed to Practitioner Aimed at the Conservation and Restauration

    NASA Astrophysics Data System (ADS)

    Inzerillo, L.; Di Paola, F.

    2017-08-01

    In recent years there has been increasing use of digital techniques for conservation and restoration purposes. Among these, a dominant role is played by digital photogrammetry packages (Agisoft Photoscan, 3D Zephir), which make it possible to obtain textured 3D models of real objects in a few steps. Combined with digital documentation technologies, digital fabrication technologies can be employed in a variety of ways to assist in heritage documentation, conservation and dissemination. This paper gives practitioners an overview of the state-of-the-art technologies and a feasible workflow for optimizing point cloud and polygon mesh datasets for the purpose of fabrication using 3D printing. The goal is to contribute a degree of automation to the whole process. We sought to identify a workflow that is applicable to several types of cases, apart from minor precautions. In our experimentation we used a DELTA WASP 2040 printer with PLA easyfil.

  11. Metavisitor, a Suite of Galaxy Tools for Simple and Rapid Detection and Discovery of Viruses in Deep Sequence Data

    PubMed Central

    Vernick, Kenneth D.

    2017-01-01

    Metavisitor is a software package that allows biologists and clinicians without specialized bioinformatics expertise to detect and assemble viral genomes from deep sequence datasets. The package is composed of a set of modular bioinformatic tools and workflows that are implemented in the Galaxy framework. Using the graphical Galaxy workflow editor, users with minimal computational skills can use existing Metavisitor workflows or adapt them to suit specific needs by adding or modifying analysis modules. Metavisitor works with DNA, RNA or small RNA sequencing data over a range of read lengths and can use a combination of de novo and guided approaches to assemble genomes from sequencing reads. We show that the software has the potential for quick diagnosis as well as discovery of viruses from a vast array of organisms. Importantly, we provide here executable Metavisitor use cases, which increase the accessibility and transparency of the software, ultimately enabling biologists or clinicians to focus on biological or medical questions. PMID:28045932

  12. Evaluation of Five Chromogenic Agar Media and the Rosco Rapid Carb Screen Kit for Detection and Confirmation of Carbapenemase Production in Gram-Negative Bacilli

    PubMed Central

    Gilmour, Matthew W.; DeGagne, Pat; Nichol, Kim; Karlowsky, James A.

    2014-01-01

    An efficient workflow to screen for and confirm the presence of carbapenemase-producing Gram-negative bacilli was developed by evaluating five chromogenic screening agar media and two confirmatory assays, the Rapid Carb screen test (Rosco Diagnostica A/S, Taastrup, Denmark) and the modified Hodge test. A panel of 150 isolates was used, including 49 carbapenemase-producing isolates representing a variety of β-lactamase enzyme classes. An evaluation of analytical performance, assay cost, and turnaround time indicated that the preferred workflow (screening test followed by confirmatory testing) was the chromID Carba agar medium (bioMérieux, Marcy l'Étoile, France), followed by the Rapid Carb screen test, yielding a combined sensitivity of 89.8% and a specificity of 100%. As an optional component of the workflow, a determination of carbapenemase gene class via molecular means could be performed subsequent to confirmatory testing. PMID:25355764

  13. CASAS: A tool for composing automatically and semantically astrophysical services

    NASA Astrophysics Data System (ADS)

    Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.

    2017-07-01

    Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Some scientific workflow technologies exist for retrieving and combining data from these sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. The tool performs automatic composition of astrophysical web services, bringing semantics-based, automatic composition of workflows. It widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies to elaborate the composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from the VO service capabilities. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.

  14. Cyberinfrastructure for End-to-End Environmental Explorations

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.

    2007-12-01

    The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.

  15. An access control model with high security for distributed workflow and real-time application

    NASA Astrophysics Data System (ADS)

    Han, Ruo-Fei; Wang, Hou-Xiang

    2007-11-01

    The traditional mandatory access control policy (MAC) is regarded as a policy with strict regulation and poor flexibility. The MAC security policy is so strict that few information systems would adopt it at the cost of convenience, except in particular cases with high security requirements such as military or government applications. However, with the increasing requirement for flexibility, even some access control systems in military applications have switched to role-based access control (RBAC), which is well known for being flexible. Although RBAC can meet the demands for flexibility, it is weak in dynamic authorization and consequently does not fit well into workflow management systems. Task-role-based access control (T-RBAC) was therefore introduced to solve the problem. It combines the advantages of RBAC and task-based access control (TBAC), which uses tasks to manage permissions dynamically. To satisfy the requirements of a system that is distributed, organized around well-defined workflow processes, and critical with respect to timing, this paper analyzes the spirit of MAC and introduces it into an improved T&RBAC model that is based on T-RBAC. Finally, a conceptual task-role-based access control model with high security for distributed workflow and real-time applications (A_T&RBAC) is built, and its performance is briefly analyzed.
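    The dynamic element that distinguishes task-based models can be illustrated with a toy permission check (illustrative names only, not the paper's model): a role grants a permission statically, but the grant is only usable while the user holds an active task that requires it.

        # Toy task-role-based authorization check.
        ROLE_PERMISSIONS = {
            "dispatcher": {"read_orders", "issue_orders"},
            "observer": {"read_orders"},
        }
        ACTIVE_TASKS = {("alice", "issue_orders"): "task_42"}  # permission bound to a live task

        def authorized(user, role, permission):
            if permission not in ROLE_PERMISSIONS.get(role, set()):
                return False                           # static RBAC check
            return (user, permission) in ACTIVE_TASKS  # dynamic task binding (the TBAC part)

        print(authorized("alice", "dispatcher", "issue_orders"))  # True while task_42 is active
        print(authorized("bob", "dispatcher", "issue_orders"))    # False: no active task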

  16. Flexible Early Warning Systems with Workflows and Decision Tables

    NASA Astrophysics Data System (ADS)

    Riedel, F.; Chaves, F.; Zeiner, H.

    2012-04-01

    An essential part of early warning systems and systems for crisis management is decision support systems that facilitate communication and collaboration. Often, official policies specify how different organizations collaborate and what information is communicated to whom. For early warning systems it is crucial that information is exchanged dynamically in a timely manner and that all participants get exactly the information they need to fulfil their role in the crisis management process. Information technology obviously lends itself to automating parts of the process. However, we have found that in current operational systems the information-logistics processes are hard-coded, even though they are subject to change. In addition, systems are tailored to the policies and requirements of a certain organization, and changes can require major software refactoring. We seek to develop a system that can be deployed and adapted to multiple organizations with different dynamic runtime policies. A major requirement for such a system is that changes can be applied locally without affecting larger parts of the system. In addition to the flexibility regarding changes in policies and processes, the system needs to be able to evolve; when new information sources become available, it should be possible to integrate and use these in the decision process. In general, this kind of flexibility comes with a significant increase in complexity. This implies that only IT professionals can maintain a system that can be reconfigured and adapted; end-users are unable to utilise the provided flexibility. In the business world similar problems arise, and previous work suggested using business process management systems (BPMS) or workflow management systems (WfMS) to guide and automate early warning processes or crisis management plans. However, the usability and flexibility of current WfMS are limited, because current notations and user interfaces are still not suitable for end-users, and workflows are usually only suited for rigid processes. We show how improvements can be achieved by using decision tables and rule-based adaptive workflows. Decision tables have been shown to be an intuitive tool that can be used by domain experts to express rule sets that can be interpreted automatically at runtime. Adaptive workflows use a rule-based approach to increase the flexibility of workflows by providing mechanisms to adapt workflows based on context changes, human intervention and availability of services. The combination of workflows, decision tables and rule-based adaptation creates a framework that opens up new possibilities for flexible and adaptable workflows, especially for use in early warning and crisis management systems.
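    A decision table of the kind described can be represented very simply; the sketch below (hypothetical rules and recipients) evaluates rows in order at runtime, so a domain expert can change the table without touching the surrounding workflow code.

        # Toy decision table for alert routing in an early warning process.
        DECISION_TABLE = [
            {"when": lambda r: r["level"] == "severe" and r["population"] > 10000,
             "then": ["civil_protection", "mayor", "public_broadcast"]},
            {"when": lambda r: r["level"] == "severe",
             "then": ["civil_protection", "mayor"]},
            {"when": lambda r: r["level"] == "moderate",
             "then": ["civil_protection"]},
        ]

        def recipients(report):
            for rule in DECISION_TABLE:       # first matching row wins
                if rule["when"](report):
                    return rule["then"]
            return []

        print(recipients({"level": "severe", "population": 25000}))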

  17. U.S. Overseas Military Posture: Relative Costs and Strategic Benefits

    DTIC Science & Technology

    2013-01-01

    CORPORATION RESEARCH BRIEF. U.S. Overseas Military Posture: Relative Costs and Strategic Benefits. The United States is at an inflection... posture translates into benefits; the risks that different potential postures pose and the cost of maintaining these postures; how these benefits ...changes. Strategic Benefits of Overseas Posture: Overseas presence contributes to contingency responsiveness, deterrence of adversaries and assurance of

  18. SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.

    PubMed

    Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B

    2016-02-04

    Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.
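    The step sequence the abstract lists (trim and remove adapters, map to a reference, count gene features, test for differential expression, check batch effects) can be sketched as a driver script; the stub functions below are hypothetical stand-ins for the external tools SPARTA wraps, not SPARTA's own code.

        # Schematic single-end bacterial RNA-seq pipeline with stub steps.
        def trim_reads(fastq):                 # read QC / adapter removal
            return fastq.replace(".fastq", ".trimmed.fastq")

        def map_reads(fastq, reference):       # alignment to the reference genome
            return fastq.replace(".trimmed.fastq", ".bam")

        def count_features(bam, annotation):   # per-gene read counts
            return {"geneA": 0, "geneB": 0}

        def differential_expression(counts_by_sample):  # condition contrasts
            return {"geneA": 0.01}             # e.g. adjusted p-values per gene

        samples = ["cond1_rep1.fastq", "cond1_rep2.fastq", "cond2_rep1.fastq"]
        counts = {s: count_features(map_reads(trim_reads(s), "ref.fasta"), "ref.gff")
                  for s in samples}
        print(differential_expression(counts))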

  19. A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images

    NASA Astrophysics Data System (ADS)

    Möller, Manuel; Tuot, Christopher; Sintek, Michael

    In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Therefore joint efforts have to be made in more than one dimension: Object detection processes have to be specified in which an abstraction is performed starting from low-level image features across landmark detection utilizing abstract domain knowledge up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.

  20. Data Curation: Improving Environmental Health Data Quality.

    PubMed

    Yang, Lin; Li, Jiao; Hou, Li; Qian, Qing

    2015-01-01

    With the growing recognition of the influence of climate change on human health, scientists have turned their attention to analyzing the relationship between meteorological factors and adverse health effects. However, the paucity of high-quality integrated data is one of the great challenges, especially when scientific studies rely on data-intensive computing. This paper aims to design an appropriate curation process to address this problem. We present a data curation workflow that (i) follows the guidance of the DCC Curation Lifecycle Model; (ii) combines manual curation with automatic curation; and (iii) addresses the environmental health data curation problem. The workflow was applied to a medical knowledge service system and shown to be capable of improving work efficiency and data quality.

  1. Proceedings of the Ship Control Systems Symposium (6th) Held in Ottawa, Canada on 26-30 October 1981. Supplement

    DTIC Science & Technology

    1981-10-30

    [Fragmentary OCR of the proceedings' front matter, Ottawa, Ont.] PUBLICATION INFORMATION: These papers were printed just as ... prominently marked with the copyright symbol and was released for publication in these proceedings. Requests for information regarding the ... that was required to ensure this success. The Symposium organizing committee, advisory groups, publications branch, authors, session chairmen

  2. Status of stable isotope enrichment, products, and services at the Oak Ridge National Laboratory

    NASA Astrophysics Data System (ADS)

    Scott Aaron, W.; Tracy, Joe G.; Collins, Emory D.

    1997-02-01

    The Oak Ridge National Laboratory (ORNL) has been supplying enriched stable and radioactive isotopes to the research, medical, and industrial communities for over 50 years. Very significant changes have occurred in this effort over the past several years, and, while many of these changes have had a negative impact on the availability of enriched isotopes, more recent developments are actually improving the situation for both the users and the producers of enriched isotopes. ORNL is still a major producer and distributor of radioisotopes, but future isotope enrichment operations to be conducted at the Isotope Enrichment Facility (IEF) will be limited to stable isotopes. Among the positive changes in the enriched stable isotope area are a well-functioning, long-term contract program, which offers stability and pricing advantages; the resumption of calutron operations; the adoption of prorated conversion charges, which greatly improves the pricing of isotopes for small users; ISO 9002 registration of the IEF's quality management system; and a much more customer-oriented business philosophy. Efforts are also being made to restore and improve upon the extensive chemical and physical form processing capabilities that once existed in the enriched stable isotope program. Innovative ideas are being pursued in both technical and administrative areas to encourage the beneficial use of enriched stable isotopes and the development of related technologies.

  3. THE FIRST SPECTROPOLARIMETRIC MONITORING OF THE PECULIAR O4 Ief SUPERGIANT ζ PUPPIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubrig, S.; Ilyin, I.; Kholtygin, A.

    2016-05-10

    The origin of the magnetic field in massive O-type stars is still under debate. To model the physical processes responsible for the generation of O star magnetic fields, it is important to understand whether correlations between the presence of a magnetic field and stellar evolutionary state, rotation velocity, kinematical status, and surface composition can be identified. The O4 Ief supergiant ζ Pup is a fast rotator and a runaway star, which may be a product of a past binary interaction, possibly having had an encounter with the cluster Trumpler 10 some 2 Myr ago. The currently available observational material suggests that certain observed phenomena in this star may be related to the presence of a magnetic field. We acquired spectropolarimetric observations of ζ Pup with FORS 2 mounted on the 8 m Antu telescope of the Very Large Telescope to investigate whether a magnetic field is indeed present in this star. We show that many spectral lines are highly variable and probably vary with the recently detected period of 1.78 days. No magnetic field is detected in ζ Pup, as no magnetic field measurement has a significance level higher than 2.4 σ. Still, we studied the probability that a single sinusoid explains the variation of the longitudinal magnetic field measurements.

  4. Analysis of N- and O-linked protein glycosylation in children with Prader-Willi syndrome.

    PubMed

    Munce, T; Heussler, H S; Bowling, F G

    2010-10-01

    Current genotype-phenotype correlations in Prader-Willi syndrome (PWS) struggle to explain the diversity of the phenotype, and there is a need to move towards a molecular understanding of PWS. A range of glycoprotein-related functions are involved in the pathophysiology of PWS, and it may be that abnormal glycosylation is contributing to the biological phenotype. The objective of this study was to investigate the state of N- and O-linked glycosylation in children with Prader-Willi syndrome. Twenty-three children with PWS and 20 non-PWS controls were included in the study. Protein N-linked glycosylation was assessed by analysing serum transferrin by mass spectrometry, and O-linked glycosylation by isoelectric focusing (IEF) of serum apolipoprotein C-III (apoC-III), confirmed by mass spectrometry. The results of this analysis indicated that the N-linked glycosylation pathway in PWS is normal. A subgroup of PWS individuals was found to have a hyposialylated pattern of apoC-III isoforms. This was independent of the underlying genetic mechanism and is the first report of an apoC-III IEF abnormality in PWS. This is the first report of apoC-III hyposialylation in PWS. As this field is in its infancy, additional study is required before these findings may be used in clinical settings. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  5. An optimized workflow for the integration of biological information into radiotherapy planning: experiences with T1w DCE-MRI

    NASA Astrophysics Data System (ADS)

    Neff, T.; Kiessling, F.; Brix, G.; Baudendistel, K.; Zechmann, C.; Giesel, F. L.; Bendl, R.

    2005-09-01

    Planning of radiotherapy is often difficult due to the limitations of morphological images. New imaging techniques enable the integration of biological information into treatment planning and help to improve the detection of vital and aggressive tumour areas, which might improve clinical outcome. However, morphological data sets are still the gold standard in the planning of radiotherapy today. In this paper, we introduce an in-house software platform that enables us to combine images from different imaging modalities, yielding biological and morphological information, in a workflow-driven approach. This is demonstrated for the combination of morphological CT, MRI, functional DCE-MRI and PET data. Data from patients with a prostate tumour and with a meningioma were examined with DCE-MRI, applying pharmacokinetic two-compartment models for post-processing. The results were compared with the clinical plans for radiation therapy. The generated parameter maps give additional information about tumour spread, which can be incorporated into the definition of safety margins.

  6. A combined LS-SVM & MLR QSAR workflow for predicting the inhibition of CXCR3 receptor by quinazolinone analogs.

    PubMed

    Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Igglessi-Markopoulou, Olga; Kollias, George

    2010-05-01

    A novel QSAR workflow is constructed that combines MLR with LS-SVM classification techniques for the identification of quinazolinone analogs as "active" or "non-active" CXCR3 antagonists. The accuracy of the LS-SVM classification technique for the training set and test set was 100% and 90%, respectively. For the "active" analogs, a validated MLR QSAR model accurately estimates their IP-10 IC50 inhibition values. The accuracy of the QSAR model (R^2 = 0.80) is illustrated using various evaluation techniques, such as the leave-one-out procedure (R^2(LOO) = 0.67) and validation through an external test set (R^2(pred) = 0.78). The key conclusion of this study is that the selected molecular descriptors, the highest occupied molecular orbital energy (HOMO), the principal moments of inertia PMIX and PMIZ, the polar surface area (PSA), the presence of a triple bond (PTrplBnd), and the Kier shape descriptor (1-kappa), demonstrate discriminatory and pharmacophoric ability.
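    A minimal sketch of the classify-then-regress pattern described here, assuming scikit-learn and synthetic descriptor data (an RBF-kernel SVC stands in for the paper's LS-SVM; the descriptors, labels and potencies are invented for illustration):

        # Step 1: classify analogs as active/non-active; step 2: MLR potency model on actives.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 6))                   # 6 molecular descriptors per analog
        active = (X[:, 0] + X[:, 3] > 0).astype(int)   # toy activity labels
        pic50 = 6.0 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.2, size=60)

        clf = SVC(kernel="rbf").fit(X, active)
        reg = LinearRegression().fit(X[active == 1], pic50[active == 1])

        candidate = rng.normal(size=(1, 6))
        if clf.predict(candidate)[0] == 1:
            print("predicted pIC50:", reg.predict(candidate)[0])
        else:
            print("predicted non-active; no potency estimate")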

  7. Accelerating materials discovery through the development of polymer databases

    NASA Astrophysics Data System (ADS)

    Audus, Debra

    In our line of business we create chemical solutions for a wide range of applications, such as home and personal care, printing and packaging, automotive and structural coatings, and structural plastics and foams applications. In this environment, stable and highly automated workflows suitable to handle complex systems are a must. By satisfying these prerequisites, efficiency for the development of new materials can be significantly improved by combining modeling and experimental approaches. This is in fact in line with recent Materials Genome Initiative efforts sponsored by the US administration. From our experience, we know, that valuable contributions to product development are possible today by combining existing modeling techniques in an intelligent fashion, provided modeling and experiment work closely together. In my presentation I intend to review approaches to build and parameterize soft matter systems. As an example of our standard workflow, I will show a few applications, which include the design of a stabilizer molecule for dispersing polymer particles and the simulation of polystyrene dispersions.

  8. Flexible workflow sharing and execution services for e-scientists

    NASA Astrophysics Data System (ADS)

    Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely

    2013-04-01

    The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides based on the platform to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides similar access capabilities to the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third-party workflow engines, provides support for the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.

  9. Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.

    PubMed

    Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell

    2011-07-26

    Biological databases and computational biology tools are provided by research groups around the world and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has been commonly addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform to tie bioinformatics resources together, namely Web Services. In the last decade Web Services have spread widely in bioinformatics and earned the status of a recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables an efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence makes it possible to execute in silico experiments at genome scale using standard SOAP Web Services and workflows. As a proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
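    The core of the data-partitioning pattern, stripped of the SOAP/BPEL machinery (a generic illustration, not the authors' implementation), is simply to chunk the input and invoke the service once per chunk, so that neither the orchestrator nor the service has to hold the full dataset at once:

        # Partition a large record set and call the (stand-in) service per chunk.
        def partition(records, chunk_size):
            for i in range(0, len(records), chunk_size):
                yield records[i:i + chunk_size]

        def annotate_chunk(chunk):
            # stand-in for a SOAP call to a remote annotation service
            return [seq.lower() for seq in chunk]

        sequences = [f"SEQ{i}" for i in range(10)]
        annotated = [result for chunk in partition(sequences, 3)
                     for result in annotate_chunk(chunk)]
        print(len(annotated))  # 10: all records processed in chunks of 3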

  10. Provenance for Runtime Workflow Steering and Validation in Computational Seismology

    NASA Astrophysics Data System (ADS)

    Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M.

    2014-12-01

    Provenance systems may be offered by modern workflow engines to collect metadata about data transformations at runtime. If combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation process of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, if we consider research applications performing long-lasting cross-correlation analysis and high-resolution simulations, the immediate notification of logical errors and rapid access to intermediate results can produce reactions which foster more efficient progress of the research. These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine-grained provenance and the development of provenance-aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc.). This work looks at the adoption of W3C PROV concepts and its data model within a user-driven processing and validation framework for seismic data, which also supports computational and data management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with the community's understanding. Moreover, it can contain user-defined terms and annotations. The current implementation of the system is supported by the EU-funded VERCE project (http://verce.eu). In addition to the provenance generation mechanisms, it provides a prototype browser-based user interface and a web API built on top of a NoSQL storage technology, experimenting with ways to ensure rapid and flexible access to the lineage traces. It supports users with the visualisation of graphical products and offers combined operations to access and download data that may be selectively stored at runtime into dedicated data archives.

  11. A Time-Motion Study of ICU Workflow and the Impact of Strain.

    PubMed

    Hefter, Yosefa; Madahar, Purnema; Eisen, Lewis A; Gong, Michelle N

    2016-08-01

    Understanding ICU workflow and how it is impacted by ICU strain is necessary for implementing effective improvements. This study aimed to quantify how ICU physicians spend time and to examine the impact of ICU strain on workflow. Prospective, observational time-motion study. Five ICUs in two hospitals at an academic medical center. Thirty attending and resident physicians. None. In 137 hours of field observations, the most time, 84 hours (62% of total observation time), was spent on professional communication. Reviewing patient data and documentation occupied a combined 52 hours (38%), whereas direct patient care and education occupied 24 hours (17%) and 13 hours (9%), respectively. The most frequently used tool was the computer, used in tasks that occupied 51 hours (37%). Severity of illness of the ICU on the day of observation was the only strain factor that significantly impacted work patterns. In a linear regression model, an increase in average ICU Sequential Organ Failure Assessment score was associated with more time spent on direct patient care (β = 4.3; 95% CI, 0.9-7.7) and education (β = 3.2; 95% CI, 0.7-5.8), and less time spent on documentation (β = -7.4; 95% CI, -11.6 to -3.2) and on tasks using the computer (β = -7.8; 95% CI, -14.1 to -1.6). These results were more pronounced with a combined strain score that took into account unit census and Sequential Organ Failure Assessment score. After accounting for ICU type (medical vs surgical) and staffing structure (resident staffed vs physician assistant staffed), results changed minimally. Clinicians spend the bulk of their time in the ICU on professional communication and tasks involving computers. With the strain of high severity of illness and a full unit, clinicians reallocate time from documentation to patient care and education. Further efforts are needed to examine system-related aspects of care to understand the impact of workflow and strain on patient care.
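    For readers unfamiliar with the reporting of β coefficients and confidence intervals above, the sketch below fits the same kind of linear model (time allocation versus average SOFA score) with statsmodels. The data are invented and the true coefficient is set arbitrarily to echo the reported effect size; this is purely illustrative, not the study's analysis code.

    ```python
    # Illustrative only: made-up data, ordinary least-squares fit with statsmodels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"mean_sofa": rng.uniform(2, 12, size=60)})        # unit-day severity of illness
    df["direct_care_min"] = 30 + 4.3 * df["mean_sofa"] + rng.normal(0, 10, size=60)

    model = smf.ols("direct_care_min ~ mean_sofa", data=df).fit()
    beta = model.params["mean_sofa"]
    ci_low, ci_high = model.conf_int().loc["mean_sofa"]
    print(f"beta = {beta:.1f}, 95% CI {ci_low:.1f} to {ci_high:.1f}")
    ```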

  12. Integrating Visualizations into Modeling NEST Simulations

    PubMed Central

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W.

    2015-01-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain. Second, because of the heterogeneous data that result from simulations, researchers want to relate these data in order to investigate them effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work. PMID:26733860

  13. Scientist-Centered Workflow Abstractions via Generic Actors, Workflow Templates, and Context-Awareness for Groundwater Modeling and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.

    2011-07-04

    A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.
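    The "generic actor" and "workflow template" ideas can be pictured with a small, hypothetical Python sketch: an abstract interface hides a specific tool behind a common contract, so a reusable template can be run with different concrete implementations. All names and methods below are invented for illustration and are not the abstractions developed in the cited work.

    ```python
    # Hypothetical sketch of generic actors and a reusable workflow template.
    from abc import ABC, abstractmethod

    class GenericActor(ABC):
        """Abstract interface a workflow template composes against."""
        @abstractmethod
        def execute(self, inputs: dict) -> dict:
            ...

    class GroundwaterSolverActor(GenericActor):
        """One concrete realization; another solver could be substituted
        without changing the template below."""
        def execute(self, inputs: dict) -> dict:
            return {"head_field": f"solved({inputs['mesh']})"}

    def calibration_template(actor: GenericActor, mesh: str) -> dict:
        """A reusable pattern: prepare inputs, run the actor, post-process."""
        result = actor.execute({"mesh": mesh})
        result["report"] = "summary of " + result["head_field"]
        return result

    print(calibration_template(GroundwaterSolverActor(), mesh="site_A.msh"))
    ```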

  14. Comparison of experimental and DFT-calculated NMR chemical shifts of 2-amino and 2-hydroxyl substituted phenyl benzimidazoles, benzoxazoles and benzothiazoles in four solvents using the IEF-PCM solvation model.

    PubMed

    Pierens, Gregory K; Venkatachalam, T K; Reutens, David C

    2016-04-01

    A comparative study of experimental and calculated NMR chemical shifts of six compounds comprising 2-amino and 2-hydroxy phenyl benzoxazoles/benzothiazoles/benzimidazoles in four solvents is reported. The benzimidazoles showed interesting spectral characteristics, which are discussed. The proton and carbon chemical shifts were similar for all solvents. The largest chemical shift deviations were observed in benzene. The chemical shifts were calculated with density functional theory using a suite of four functionals and basis set combinations. The calculated chemical shifts revealed a good match to the experimentally observed values in most of the solvents. The mean absolute error was used as the primary metric. The use of an additional metric is suggested, which is based on the order of chemical shifts. The DP4 probability measures were also used to compare the experimental and calculated chemical shifts for each compound in the four solvents. Copyright © 2015 John Wiley & Sons, Ltd.
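    The two agreement metrics mentioned above can be sketched in a few lines: the mean absolute error between experimental and calculated shifts, and an order-based comparison, here taken as Kendall's tau (the paper's metric may differ). The shift values are invented for illustration.

    ```python
    # Mean absolute error and rank-order agreement between experimental and calculated shifts.
    import numpy as np
    from scipy.stats import kendalltau

    experimental = np.array([7.85, 7.42, 7.31, 6.98, 6.75])   # ppm, invented values
    calculated   = np.array([7.91, 7.38, 7.40, 7.02, 6.70])   # ppm, invented values

    mae = np.mean(np.abs(calculated - experimental))
    tau, p_value = kendalltau(experimental, calculated)        # tau = 1.0 means identical shift ordering

    print(f"MAE = {mae:.3f} ppm, Kendall tau = {tau:.2f}")
    ```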

  15. Differential proteomics analysis of Frankliniella occidentalis immune response after infection with Tomato spotted wilt virus (Tospovirus).

    PubMed

    Ogada, Pamella Akoth; Kiirika, Leonard Muriithi; Lorenz, Christin; Senkler, Jennifer; Braun, Hans-Peter; Poehling, Hans-Michael

    2017-02-01

    Tomato spotted wilt virus (TSWV) is mainly vectored by Frankliniella occidentalis Pergande, and it potentially activates the vector's immune response. However, the molecular background of the altered immune response is not clearly understood. Therefore, using a proteomic approach, we investigated the immune pathways that are activated in F. occidentalis larvae after 24 h exposure to TSWV. Two-dimensional isoelectric focusing/sodium dodecyl sulfate polyacrylamide gel electrophoresis (2D-IEF/SDS/PAGE) combined with mass spectrometry (MS) was used to identify proteins that were differentially expressed upon viral infection. High numbers of proteins were abundantly expressed in F. occidentalis exposed to TSWV (73%) compared to the non-exposed (27%), with the majority functionally linked to the innate immune system, such as signaling, stress response, defense response, translation, and cellular lipid and nucleotide metabolism. Key proteins included 70 kDa heat shock proteins, ubiquitin and dermcidin, among others, indicative of a responsive pattern of the vector's innate immune system to viral infection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. 2-D zymographic analysis of Broccoli (Brassica oleracea L. var. Italica) florets proteases: follow up of cysteine protease isotypes in the course of post-harvest senescence.

    PubMed

    Rossano, Rocco; Larocca, Marilena; Riccio, Paolo

    2011-09-01

    Zymographic analysis of Broccoli florets (Brassica oleracea L. var. Italica) revealed the presence of acidic metallo-proteases, serine proteases and cysteine proteases. Under conditions which were denaturing for the other proteases, the study was restricted to cysteine proteases. 2-D zymography, a technique that combines IEF and zymography, was used to show the presence of 11 different cysteine protease spots with molecular masses of 44 and 47-48 kDa and pIs ranging between 4.1 and 4.7. pI differences could be ascribed to different degrees of phosphorylation that partly disappeared in the presence of alkaline phosphatase. Post-harvest senescence of Broccoli florets was characterized by a decrease in protein and chlorophyll contents and an increase in protease activity. In particular, as determined by 2-D zymography, the amount of cysteine protease clearly increased during senescence, a finding that may represent a useful tool for the control of the aging process. Copyright © 2011 Elsevier GmbH. All rights reserved.

  17. Dynamic computer simulations of electrophoresis: three decades of active research.

    PubMed

    Thormann, Wolfgang; Caslavska, Jitka; Breadmore, Michael C; Mosher, Richard A

    2009-06-01

    Dynamic models for electrophoresis are based upon model equations derived from the transport concepts in solution together with user-inputted conditions. They are able to predict theoretically the movement of ions and are as such the most versatile tool to explore the fundamentals of electrokinetic separations. Since its inception three decades ago, the state of dynamic computer simulation software and its use has progressed significantly and Electrophoresis played a pivotal role in that endeavor as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. This has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples and a discussion of the applications and achievements of dynamic simulation.
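    The transport concept behind such simulators can be sketched with a toy example: a single ionic species migrating in a constant electric field, stepped forward with explicit finite differences. Real dynamic simulators solve coupled equations for all species, with local field strength and pH computed self-consistently; the parameter values and the single-species simplification below are assumptions made only to illustrate the idea.

    ```python
    # Toy 1-D advection-diffusion sketch of electrophoretic transport (not the reviewed software).
    import numpy as np

    nx, dx, dt = 200, 1e-4, 1e-2              # grid points, grid spacing (m), time step (s)
    mobility, field, diff = 3e-8, 5e3, 1e-9   # mobility m^2/(V s), field V/m, diffusivity m^2/s

    c = np.zeros(nx)
    c[90:110] = 1.0                           # initial sample zone

    velocity = mobility * field               # electrophoretic velocity (m/s)
    for _ in range(2000):
        adv = -velocity * np.gradient(c, dx)              # migration driven by the field
        dif = diff * np.gradient(np.gradient(c, dx), dx)  # diffusive spreading of the zone
        c = c + dt * (adv + dif)

    print(f"sample zone centre is now near grid index {int(np.argmax(c))}")
    ```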

  18. DEWEY: the DICOM-enabled workflow engine system.

    PubMed

    Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L

    2014-06-01

    Workflow is a widely used term to describe the sequence of steps to accomplish a task. The use of workflow technology in medicine and medical imaging in particular is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department. We designed it in a way to allow for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort. It also met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently assure that complex workflows are completed in a timely fashion.

  19. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    PubMed

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.

  20. Workflow of the Grover algorithm simulation incorporating CUDA and GPGPU

    NASA Astrophysics Data System (ADS)

    Lu, Xiangwen; Yuan, Jiabin; Zhang, Weiwei

    2013-09-01

    The Grover quantum search algorithm, one of only a few representative quantum algorithms, can speed up many classical algorithms that use search heuristics. No true quantum computer has yet been developed. For the present, simulation is one effective means of verifying the search algorithm. In this work, we focus on the simulation workflow using a compute unified device architecture (CUDA). Two simulation workflow schemes are proposed. These schemes combine the characteristics of the Grover algorithm and the parallelism of general-purpose computing on graphics processing units (GPGPU). We also analyzed the optimization of memory space and memory access from this perspective. We implemented four programs on CUDA to evaluate the performance of schemes and optimization. Through experimentation, we analyzed the organization of threads suited to Grover algorithm simulations, compared the storage costs of the four programs, and validated the effectiveness of optimization. Experimental results also showed that the distinguished program on CUDA outperformed the serial program of libquantum on a CPU with a speedup of up to 23 times (12 times on average), depending on the scale of the simulation.
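    The computation that the CUDA schemes above parallelize is a state-vector simulation of Grover's search. The sketch below is a plain NumPy version for a small register, with the qubit count and marked item chosen arbitrarily; it illustrates the oracle and diffusion steps only and makes no attempt at GPU-style optimization.

    ```python
    # Plain NumPy state-vector simulation of Grover's search (illustrative only).
    import numpy as np

    n_qubits, marked = 10, 321
    dim = 2 ** n_qubits

    state = np.full(dim, 1.0 / np.sqrt(dim))        # uniform superposition after the Hadamard layer
    iterations = int(np.floor(np.pi / 4 * np.sqrt(dim)))

    for _ in range(iterations):
        state[marked] *= -1.0                        # oracle: flip the phase of the marked state
        mean = state.mean()
        state = 2 * mean - state                     # diffusion operator: inversion about the mean

    print(f"success probability after {iterations} iterations: {state[marked] ** 2:.3f}")
    ```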

  1. KNIME4NGS: a comprehensive toolbox for next generation sequencing analysis.

    PubMed

    Hastreiter, Maximilian; Jeske, Tim; Hoser, Jonathan; Kluge, Michael; Ahomaa, Kaarin; Friedl, Marie-Sophie; Kopetzky, Sebastian J; Quell, Jan-Dominik; Mewes, H Werner; Küffner, Robert

    2017-05-15

    Analysis of Next Generation Sequencing (NGS) data requires the processing of large datasets by chaining various tools with complex input and output formats. In order to automate data analysis, we propose to standardize NGS tasks into modular workflows. This simplifies reliable handling and processing of NGS data, and corresponding solutions become substantially more reproducible and easier to maintain. Here, we present a documented, Linux-based toolbox of 42 processing modules that are combined to construct workflows facilitating a variety of tasks such as DNAseq and RNAseq analysis. We also describe important technical extensions. The high throughput executor (HTE) helps to increase the reliability and to reduce manual interventions when processing complex datasets. We also provide a dedicated binary manager that assists users in obtaining the modules' executables and keeping them up to date. As basis for this actively developed toolbox we use the workflow management software KNIME. See http://ibisngs.github.io/knime4ngs for nodes and user manual (GPLv3 license). robert.kueffner@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  2. Preservation of protein fluorescence in embedded human dendritic cells for targeted 3D light and electron microscopy

    PubMed Central

    HÖHN, K.; FUCHS, J.; FRÖBER, A.; KIRMSE, R.; GLASS, B.; ANDERS‐ÖSSWEIN, M.; WALTHER, P.; KRÄUSSLICH, H.‐G.

    2015-01-01

    Summary In this study, we present a correlative microscopy workflow to combine detailed 3D fluorescence light microscopy data with ultrastructural information gained by 3D focused ion beam assisted scanning electron microscopy. The workflow is based on an optimized high pressure freezing/freeze substitution protocol that preserves good ultrastructural detail along with retaining the fluorescence signal in the resin embedded specimens. Consequently, cellular structures of interest can readily be identified and imaged by state-of-the-art 3D confocal fluorescence microscopy and are precisely referenced with respect to an imprinted coordinate system on the surface of the resin block. This allows precise guidance of the focused ion beam assisted scanning electron microscopy and limits the volume to be imaged to the structure of interest. This, in turn, minimizes the total acquisition time necessary to conduct the time-consuming ultrastructural scanning electron microscope imaging while eliminating the risk of missing parts of the target structure. We illustrate the value of this workflow for targeting virus compartments, which are formed in HIV‐pulsed mature human dendritic cells. PMID:25786567

  3. XML schemas for common bioinformatic data types and their application in workflow systems

    PubMed Central

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at , the BioDOM library can be obtained at . Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823
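    A common way workflow components make use of such shared schemas is to validate an exchanged document before processing it. The sketch below shows this with lxml; the schema and document file names are placeholders, not the actual HOBIT/BioDOM schema locations.

    ```python
    # Validate an exchanged XML document against an agreed schema before passing it downstream.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("sequence_alignment.xsd"))       # placeholder schema file
    document = etree.parse("alignment_from_upstream_tool.xml")            # placeholder document

    if schema.validate(document):
        print("document conforms to the agreed format; safe to pass downstream")
    else:
        for error in schema.error_log:
            print(error.message)
    ```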

  4. Procedural Modeling for Rapid-Prototyping of Multiple Building Phases

    NASA Astrophysics Data System (ADS)

    Saldana, M.; Johanson, C.

    2013-02-01

    RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.

  5. Intuitive presentation of clinical forensic data using anonymous and person-specific 3D reference manikins.

    PubMed

    Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva

    2014-08-01

    The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images along with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. CLEW: A Cooperative Learning Environment for the Web.

    ERIC Educational Resources Information Center

    Ribeiro, Marcelo Blois; Noya, Ricardo Choren; Fuks, Hugo

    This paper outlines CLEW (collaborative learning environment for the Web). The project combines MUD (Multi-User Dimension), workflow, VRML (Virtual Reality Modeling Language) and educational concepts like constructivism in a learning environment where students actively participate in the learning process. The MUD shapes the environment structure.…

  7. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples.

    PubMed

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David; Šlapeta, Jan

    2017-09-01

    Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of highly sensitive species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. A new molecular diagnostic workflow for the highly-sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74-0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes; enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially-available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65-88%), compared to the sensitivity (91-100%) of the new molecular diagnostic workflow. Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited.
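    The correlation reported above (R² between real-time PCR quantification values and faecal egg counts) is an ordinary least-squares statistic and can be reproduced in a few lines. The values below are invented and only illustrate the calculation, not the study data.

    ```python
    # Illustrative R^2 between PCR quantification values and faecal egg counts (invented data).
    import numpy as np
    from scipy.stats import linregress

    faecal_egg_count = np.array([0, 2, 5, 8, 12, 20, 35, 50], dtype=float)      # eggs per gram
    pcr_quantity     = np.array([0.1, 1.8, 4.2, 7.5, 10.9, 18.3, 30.1, 47.6])   # arbitrary units

    fit = linregress(faecal_egg_count, pcr_quantity)
    print(f"R^2 = {fit.rvalue ** 2:.2f}")   # the study reported values around 0.74-0.76
    ```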

  8. A Web Interface for Eco System Modeling

    NASA Astrophysics Data System (ADS)

    McHenry, K.; Kooper, R.; Serbin, S. P.; LeBauer, D. S.; Desai, A. R.; Dietze, M. C.

    2012-12-01

    We have developed the Predictive Ecosystem Analyzer (PEcAn) as an open-source scientific workflow system and ecoinformatics toolbox that manages the flow of information in and out of regional-scale terrestrial biosphere models, facilitates heterogeneous data assimilation, tracks data provenance, and enables more effective feedback between models and field research. The over-arching goal of PEcAn is to make otherwise complex analyses transparent, repeatable, and accessible to a diverse array of researchers, allowing both novice and expert users to focus on using the models to examine complex ecosystems rather than having to deal with complex computer system setup and configuration questions in order to run the models. Through the developed web interface we hide much of the data and model details and allow the user to simply select locations, ecosystem models, and desired data sources as inputs to the model. Novice users are guided by the web interface through setting up a model execution and plotting the results. At the same time expert users are given enough freedom to modify specific parameters before the model gets executed. This will become more important as more and more models are added to the PEcAn workflow and as more and more data become available as NEON comes online. On the backend we support the execution of potentially computationally expensive models on different High Performance Computers (HPC) and/or clusters. The system can be configured with a single XML file that gives it the flexibility needed for configuring and running the different models on different systems using a combination of information stored in a database as well as pointers to files on the hard disk. While the web interface usually creates this configuration file, expert users can still directly edit it to fine-tune the configuration. Once a workflow is finished the web interface will allow for the easy creation of plots over result data while also allowing the user to download the results for further processing. The current workflow in the web interface is a simple linear workflow, but will be expanded to allow for more complex workflows. We are working with Kepler and Cyberintegrator to allow for these more complex workflows as well as collecting provenance of the workflow being executed. This provenance regarding model executions is stored in a database along with the derived results. All of this information is then accessible using the BETY database web frontend.

  9. A Cultural Resources Overview and Reconnaissance Survey of Two Dry Reservoirs, Tazewell County, Illinois,

    DTIC Science & Technology

    1986-01-01


  10. Synoptic Meteorology during the SNOW-ONE-A Field Experiment.

    DTIC Science & Technology

    1983-05-01


  11. Optical Computations for Image Bandwidth Compression.

    DTIC Science & Technology

    1981-05-15

    Simulations of an incoherent optical video feedback processor.

  12. Inferring Clinical Workflow Efficiency via Electronic Medical Record Utilization

    PubMed Central

    Chen, You; Xie, Wei; Gunter, Carl A; Liebovitz, David; Mehrotra, Sanjay; Zhang, He; Malin, Bradley

    2015-01-01

    Complexity in clinical workflows can lead to inefficiency in making diagnoses, ineffectiveness of treatment plans and uninformed management of healthcare organizations (HCOs). Traditional strategies to manage workflow complexity are based on measuring the gaps between workflows defined by HCO administrators and the actual processes followed by staff in the clinic. However, existing methods tend to neglect the influences of EMR systems on the utilization of workflows, which could be leveraged to optimize workflows facilitated through the EMR. In this paper, we introduce a framework to infer clinical workflows through the utilization of an EMR and show how such workflows roughly partition into four types according to their efficiency. Our framework infers workflows at several levels of granularity through data mining technologies. We study four months of EMR event logs from a large medical center, including 16,569 inpatient stays, and illustrate that approximately 95% of workflows are efficient and that 80% of patients are on such workflows. At the same time, we show that the remaining 5% of workflows may be inefficient due to a variety of factors, such as complex patients. PMID:26958173
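    A very simple building block of the kind of log-based workflow inference described above is counting how often one event type follows another within a stay. The toy sketch below shows only that idea on invented data; it is not the authors' framework or granularity model.

    ```python
    # Toy process-mining step: directly-follows counts from per-stay EMR event sequences.
    from collections import Counter
    from itertools import pairwise   # Python 3.10+

    stays = {
        "stay-1": ["admission", "lab order", "lab result", "medication", "discharge"],
        "stay-2": ["admission", "imaging order", "imaging result", "medication", "discharge"],
        "stay-3": ["admission", "lab order", "lab result", "discharge"],
    }

    transitions = Counter()
    for events in stays.values():
        transitions.update(pairwise(events))          # count each directly-follows pair

    for (src, dst), count in transitions.most_common(5):
        print(f"{src} -> {dst}: {count}")
    ```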

  13. Workflow management systems in radiology

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim

    1998-07-01

    In a situation of shrinking health care budgets, increasing cost pressure and growing demands to increase the efficiency and the quality of medical services, health care enterprises are forced to optimize or completely re-design their processes. Although information technology is agreed to potentially contribute to cost reduction and efficiency improvement, the real success factors are the re-definition and automation of processes: Business Process Re-engineering and Workflow Management. In this paper we discuss architectures for the use of workflow management systems in radiology. We propose to move forward from information systems in radiology (RIS, PACS) to Radiology Management Systems, in which workflow functionality (process definitions and process automation) is implemented through autonomous workflow management systems (WfMS). In a workflow-oriented architecture, an autonomous workflow enactment service communicates with workflow client applications via standardized interfaces. In this paper, we discuss the need for and the benefits of such an approach. The separation of workflow management systems and application systems is emphasized, along with the consequences that arise for the architecture of workflow-oriented information systems. This includes an appropriate workflow terminology, and the definition of standard interfaces for workflow-aware application systems. Workflow studies in various institutions have shown that most of the processes in radiology are well structured and suited for a workflow management approach. Numerous commercially available Workflow Management Systems (WfMS) were investigated, and some of them, which are process-oriented and application independent, appear suitable for use in radiology.

  14. Data Provenance Hybridization Supporting Extreme-Scale Scientific Workflow Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elsethagen, Todd O.; Stephan, Eric G.; Raju, Bibi

    As high performance computing (HPC) infrastructures continue to grow in capability and complexity, so do the applications that they serve. HPC and distributed-area computing (DAC) (e.g. grid and cloud) users are looking increasingly toward workflow solutions to orchestrate their complex application coupling and pre- and post-processing needs. To gain insight and a more quantitative understanding of a workflow’s performance, our method includes not only the capture of traditional provenance information, but also the capture and integration of system environment metrics, helping to give context and explanation for a workflow’s execution. In this paper, we describe IPPD’s provenance management solution (ProvEn) and its hybrid data store combining both of these data provenance perspectives.

  15. Bioinformatics workflows and web services in systems biology made easy for experimentalists.

    PubMed

    Jimenez, Rafael C; Corpas, Manuel

    2013-01-01

    Workflows are useful for performing data analysis and integration in systems biology. Workflow management systems can help users create workflows without any previous knowledge of programming and web services. However, the computational skills required to build such workflows are usually above the level most biological experimentalists are comfortable with. In this chapter we introduce workflow management systems that reuse existing workflows instead of creating them, making it easier for experimentalists to perform computational tasks.

  16. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    PubMed Central

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org. PMID:22559942

  17. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within "Knowledge-based Workflow System for Grid Applications" under the 6th Framework Programme. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge that is contained in the information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows the system to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from ES clusters.

  18. Bioanalysis: its past, present, and some future.

    PubMed

    Righetti, Pier Giorgio

    2004-07-01

    An overview of about 100 years of bioanalysis is here disastrously attempted. The beginning of rigorous analytical systems can perhaps be traced back to the building and testing of the analytical ultracentrifuge by Svedberg and the apparatus for moving-boundary electrophoresis of Tiselius, both systems relying on expensive and hard to operate machines. In the sixties, Porath discovered porous beads for the determination of relative molecular mass (Mr) of proteins, based on the principle of molecular sieving. Concomitantly, Svensson and his pupil Vesterberg described a revolutionary principle for fractionating proteins in a nonisocratic environment, based on generation of stable pH gradients in an electric field, a technique that went down in history as isoelectric focusing (IEF). Polyacrylamide gel electrophoresis (PAGE), with the brilliant idea of discontinuous buffers, was brought to the limelight and in 1967, sodium dodecyl sulfate (SDS)-PAGE was described, permitting easy assessment of protein purity and reasonable measurements of Mr values of denatured polypeptide chains. By the mid-seventies, another explosive concept was realized: orthogonal combination of two unrelated techniques, based on surface charge and mass fractionation, namely two-dimensional (2-D) PAGE, already elaborated to its utmost sophistication in the very first papers by O'Farrell. The eighties saw the systematic growth of 2-D PAGE, accompanied by systematic efforts to develop instrumentation for large-scale production of 2-D maps and computer evaluation for 2-D map analysis, based on the sophisticated algorithms adopted by astronomers for mapping stars in the sky. Another fundamental innovation in the field of IEF was the discovery of immobilized pH gradients (IPGs) that brought the much needed reproducibility in 2-D maps while allowing exquisite resolution in very narrow pH ranges. The nineties were definitely the decade of capillary zone electrophoresis, with the concomitant concept of automation and miniaturization in electrokinetic methodologies. Also 2-D map analysis witnessed a big revival, thanks to the adoption of IPGs for the first dimension. The enormous progress of mass spectrometry resulted in first reports on the analysis of macromolecules and the building of databases of gene and protein banks. The third millennium is, perhaps, exasperating the concept of miniaturization at all costs, while not disdaining increasingly larger maps for 2-D analysis of complex protein mixtures.

  19. Mineralization of alpha-1-antitrypsin inclusion bodies in Mmalton alpha-1-antitrypsin deficiency.

    PubMed

    Callea, Francesco; Giovannoni, Isabella; Francalanci, Paola; Boldrini, Renata; Faa, Gavino; Medicina, Daniela; Nobili, Valerio; Desmet, Valeer J; Ishak, Kamal; Seyama, Kuniaki; Bellacchio, Emanuele

    2018-05-16

    Alpha-1-antitrypsin (AAT) deficiency (AATD) of Z, Mmalton, Siiyama type is associated with liver storage of the mutant proteins and liver disease. The Z variant can be diagnosed on isoelectric focusing (IEF) while Mmalton and Siiyama may be missed or misdiagnosed with this technique. Therefore, molecular analysis is mandatory for their characterization. In particular, that holds true for the Mmalton variant, as its IEF profile resembles the wild M2 subtype. This is a retrospective analysis involving review of medical records and of liver biopsy specimens from a series of Mmalton, Z and Siiyama Alpha-1-antitrypsin deficiency patients. The review has been implemented by additional histological stains, electron microscopic observations and 3-D modeling studies of the sites of the mutations. Z, Mmalton and Siiyama liver specimens contained characteristic intrahepatocytic PAS-D globules. The globules differed in the three variants as only Mmalton cases showed dark basophilic precipitates within the AAT inclusions. The precipitates were visualized in haematoxylin-eosin (H.E.) stained preparations and corresponded to calcium precipitates as demonstrated by von Kossa staining. On immunohistochemistry, ZAAT inclusions were stained by polyclonal as well as monoclonal noncommercial anti-AAT antibody (AZT11), whilst Mmalton and Siiyama inclusion bodies remained negative with the monoclonal anti-Z antibody. 3-D protein analysis allowed us to predict more severe misfolding of the Mmalton molecule as compared to Z and Siiyama that could trigger anomalous interaction with endoplasmic reticulum chaperone proteins, namely calcium binding proteins. Mmalton AAT inclusion bodies contain calcium precipitates that allow the differential diagnosis with Siiyama and ZAAT inclusions in routine histological sections. The study has confirmed the specificity of the monoclonal AZT11 for the Z mutant. Thus, the combination of these two features is crucial for the distinction between the three variants and for predicting the genotype, whose confirmation would definitely require molecular analysis. Our study provides new data on the pathomorphogenesis of Mmalton inclusion bodies, whose mineralization could play a central role in a disease pathogenesis of Mmalton that is distinct from the Z and Siiyama variants. Calcium is known to be a major effector of cell death, either via increased intracellular concentration or alteration of homeostasis.

  20. The VERCE platform: Enabling Computational Seismology via Streaming Workflows and Science Gateways

    NASA Astrophysics Data System (ADS)

    Spinuso, Alessandro; Filgueira, Rosa; Krause, Amrey; Matser, Jonas; Casarotti, Emanuele; Magnoni, Federica; Gemund, Andre; Frobert, Laurent; Krischer, Lion; Atkinson, Malcolm

    2015-04-01

    The VERCE project is creating an e-Science platform to facilitate innovative data analysis and coding methods that fully exploit the wealth of data in global seismology. One of the technologies developed within the project is the Dispel4Py python library, which makes it possible to describe abstract stream-based workflows for data-intensive applications and to execute them in a distributed environment. At runtime Dispel4Py is able to map workflow descriptions dynamically onto a number of computational resources (Apache Storm clusters, MPI-powered clusters, shared-memory multi-core machines, and single-core machines), setting it apart from other workflow frameworks. Therefore, Dispel4Py enables scientists to focus on their computation instead of being distracted by details of the computing infrastructure they use. Among the workflows developed with Dispel4Py in VERCE, we mention here those for Seismic Ambient Noise Cross-Correlation and MISFIT calculation, which address two data-intensive problems that are common in computational seismology. The former, also called Passive Imaging, allows the detection of relative seismic-wave velocity variations during the time of recording, which can be associated with the stress-field changes that occurred in the test area. The MISFIT calculation, instead, takes as input the synthetic seismograms generated from HPC simulations for a certain Earth model and earthquake and, after a preprocessing stage, compares them with real observations in order to foster subsequent model updates and improvement (Inversion). The VERCE Science Gateway exposes the MISFIT calculation workflow as a service, in combination with the simulation phase. Both phases can be configured, controlled and monitored by the user via a rich user interface which is integrated within the gUSE Science Gateway framework, hiding the complexity of accessing third parties data services, security mechanisms and enactment on the target resources. Thanks to a modular extension to the Dispel4Py framework, the system collects provenance data adopting the W3C-PROV data model. Provenance recordings can be explored and analysed at run time for rapid diagnostic and workflow steering, or later for further validation and comparisons across runs. We will illustrate the interactive services of the gateway and the capabilities of the produced metadata, coupled with the VERCE data management layer based on iRODS. The Cross-Correlation workflow was evaluated on SuperMUC, a supercomputing cluster at the Leibniz Supercomputing Centre in Munich, with 155,656 processor cores in 9400 compute nodes. SuperMUC is based on the Intel Xeon architecture consisting of 18 Thin Node Islands and one Fat Node Island. This work has only had access to the Thin Node Islands, which contain Sandy Bridge nodes, each having 16 cores and 32 GB of memory. In the evaluations we used 1000 stations, and we applied two types of methods (whiten and non-whiten) for pre-processing the data. The workflow was tested on a varying number of cores (16, 32, 64, 128, and 256 cores) using the MPI mapping of Dispel4Py. The results show that Dispel4Py is able to improve the performance by increasing the number of cores without changing the description of the workflow.
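    The stream-based style described above can be pictured with a generic sketch: each stage consumes items as they arrive and emits results downstream, so the same abstract description could, in principle, be mapped onto different execution back-ends. This is deliberately NOT the Dispel4Py API; it uses plain Python generators with invented stage names purely to illustrate the pattern.

    ```python
    # Generic stream-processing pipeline sketch (not Dispel4Py; illustrative only).
    import numpy as np

    def read_traces(n_traces, n_samples=1024):
        """Source stage: emit synthetic waveform segments one at a time."""
        rng = np.random.default_rng(42)
        for _ in range(n_traces):
            yield rng.normal(size=n_samples)

    def preprocess(traces):
        """Intermediate stage: demean and normalize each trace as it streams through."""
        for trace in traces:
            trace = trace - trace.mean()
            yield trace / (np.abs(trace).max() or 1.0)

    def cross_correlate(traces):
        """Sink stage: correlate consecutive trace pairs, a highly simplified stand-in
        for the ambient-noise cross-correlation step."""
        previous = None
        for trace in traces:
            if previous is not None:
                yield np.correlate(previous, trace, mode="full").max()
            previous = trace

    pipeline = cross_correlate(preprocess(read_traces(10)))
    print([round(float(v), 2) for v in pipeline])
    ```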

  1. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  2. CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.

    2013-12-01

    As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
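    The final step mentioned above, combining per-event ground-motion intensities with rupture-forecast probabilities into a hazard curve, can be sketched in a few lines. The numbers are invented, a single intensity realization per event is assumed, and this is not the CyberShake processing code.

    ```python
    # Minimal hazard-curve illustration: annual probability of exceedance vs. intensity.
    import numpy as np

    rng = np.random.default_rng(1)
    n_events = 5000
    annual_rates = rng.uniform(1e-6, 1e-3, size=n_events)               # from a rupture forecast (invented)
    intensities  = rng.lognormal(mean=-1.5, sigma=0.8, size=n_events)   # e.g. spectral acceleration in g (invented)

    thresholds = np.logspace(-2, 0.5, 40)
    # Sum event rates whose intensity exceeds each threshold, then convert to an annual
    # probability of exceedance under a Poisson assumption.
    exceedance_rate = np.array([(annual_rates * (intensities >= x)).sum() for x in thresholds])
    annual_prob = 1.0 - np.exp(-exceedance_rate)

    for x, p in zip(thresholds[::10], annual_prob[::10]):
        print(f"IM >= {x:.3f} g: annual P(exceed) = {p:.2e}")
    ```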

  3. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics

    PubMed Central

    Zhao, Jun; Avila-Garcia, Maria Susana; Roos, Marco; Thompson, Mark; van der Horst, Eelke; Kaliyaperumal, Rajaram; Luo, Ruibang; Lee, Tin-Lap; Lam, Tak-wah; Edmunds, Scott C.; Sansone, Susanna-Assunta

    2015-01-01

    Motivation Reproducing the results from a scientific paper can be challenging due to the absence of data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent to which ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Results Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an erratum. Availability SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2/. Contact: philippe.rocca-serra@oerc.ox.ac.uk and susanna-assunta.sansone@oerc.ox.ac.uk. PMID:26154165

  4. Successful Completion of FY18/Q1 ASC L2 Milestone 6355: Electrical Analysis Calibration Workflow Capability Demonstration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copps, Kevin D.

    The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts’ use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today’s SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to the electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.

  5. Proteome Analysis of Thyroid Cancer Cells After Long-Term Exposure to a Random Positioning Machine

    NASA Astrophysics Data System (ADS)

    Pietsch, Jessica; Bauer, Johann; Weber, Gerhard; Nissum, Mikkel; Westphal, Kriss; Egli, Marcel; Grosse, Jirka; Schönberger, Johann; Eilles, Christoph; Infanger, Manfred; Grimm, Daniela

    2011-11-01

    Annulling gravity during cell culturing triggers various types of cells to change their protein expression in a time-dependent manner. We therefore decided to determine gravity-sensitive proteins and their period of sensitivity to the effects of gravity. In this study, thyroid cancer cells of the ML-1 cell line were cultured under normal gravity (1 g) or in a random positioning machine (RPM), which simulated near-weightlessness, for 7 and 11 days. Cells were then sonicated and proteins released into the supernatant were separated from those that remained attached to the cell fragments. Subsequently, both types of proteins were fractionated by free-flow isoelectric focussing (FF-IEF). The fractions obtained were further separated by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), to which comparable FF-IEF fractions derived from cells cultured either under 1 g or on the RPM had been applied side by side. The separation resulted in pairs of lanes, on which a number of identical bands were observed. Selected gel pieces were excised and their proteins determined by mass spectrometry. The same proteins were detected in comparable gel pieces from cells cultured under normal gravity and on the RPM. However, many of these proteins had received different Mascot scores. Quantifying heat shock cognate 71 kDa protein, glutathione S-transferase P, nucleoside diphosphate kinase A and annexin-2 by Western blotting using whole cell lysates indicated the usefulness of Mascot scores for selecting the most efficient antibodies.

  6. Developing integrated workflows for the digitisation of herbarium specimens using a modular and scalable approach.

    PubMed

    Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J

    2012-01-01

    Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow.

  7. RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Chen, Nengcheng; Di, Liping

    2012-10-01

    Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate the distributed sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTful workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.
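
    As a rough illustration of the Atom-based resource management described above, the sketch below builds a minimal Atom entry for a workflow resource and posts it to a RESTful collection; the endpoint URL, identifiers, and the workflowReference element are invented placeholders rather than the authors' actual service or schema.

        # Illustrative sketch: describe a Geoprocessing workflow as an Atom entry and
        # publish it to a hypothetical RESTful workflow collection. The endpoint URL,
        # identifiers and the <workflowReference> element are placeholders only.
        import urllib.request
        import xml.etree.ElementTree as ET

        ATOM = "http://www.w3.org/2005/Atom"
        ET.register_namespace("", ATOM)

        entry = ET.Element(f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = "NO2 plume processing workflow"
        ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:uuid:0a1b2c3d-0000-0000-0000-000000000001"
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2012-10-01T00:00:00Z"
        content = ET.SubElement(entry, f"{{{ATOM}}}content", type="application/xml")
        ET.SubElement(content, "workflowReference", standard="BPEL",
                      href="http://example.org/workflows/no2-processing.bpel")

        payload = ET.tostring(entry, encoding="utf-8")  # serialized Atom entry
        request = urllib.request.Request("http://example.org/workflow-collection",
                                         data=payload,
                                         headers={"Content-Type": "application/atom+xml"},
                                         method="POST")
        # urllib.request.urlopen(request)  # would publish the entry if the collection existed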

  8. Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludascher, Bertram; Altintas, Ilkay

    Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state-of-the-art in scientific workflows have focused on the following areas. Progress in each of these areas is described in subsequent sections. Workflow development: the development of a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows. Generic workflow components and templates: the development of generic actors (i.e. workflow components and processes) which can be broadly applied to scientific problems. Provenance collection and analysis: the design of a flexible provenance collection and analysis infrastructure within the workflow environment. Workflow reliability and fault tolerance: the improvement of the reliability and fault-tolerance of workflow environments.

  9. New on-line separation workflow of microbial metabolites via hyphenation of analytical and preparative comprehensive two-dimensional liquid chromatography.

    PubMed

    Yan, Xia; Wang, Li-Juan; Wu, Zhen; Wu, Yun-Long; Liu, Xiu-Xiu; Chang, Fang-Rong; Fang, Mei-Juan; Qiu, Ying-Kun

    2016-10-15

    Microbial metabolites represent an important source of bioactive natural products, but often exhibit diverse chemical structures or complicated chemical compositions with low contents of active ingredients. Traditional separation methods rely mainly on off-line combination of open-column chromatography and preparative high performance liquid chromatography (HPLC). However, the multi-step and prolonged separation procedure might lead to exposure to oxygen and structural transformation of metabolites. In the present work, a new two-dimensional separation workflow was developed for the fast isolation and analysis of microbial metabolites from Chaetomium globosum SNSHI-5, a cytotoxic fungus derived from an extreme environment. The advantage of this analytical comprehensive two-dimensional liquid chromatography (2D-LC) lies in its ability to analyze the composition of the metabolites and to optimize the separation conditions for the preparative 2D-LC. Furthermore, gram-scale preparative 2D-LC separation of the crude fungus extract could be performed on a medium-pressure liquid chromatograph×preparative high-performance liquid chromatography system under the optimized conditions. Interestingly, 12 cytochalasan derivatives, including two new compounds named cytoglobosin Ab (3) and isochaetoglobosin Db (8), were successfully obtained with high purity in a short period of time. The structures of the isolated metabolites were comprehensively characterized by HR ESI-MS and NMR. Notably, this is the first report on the combination of analytical and preparative 2D-LC for the separation of microbial metabolites. The new workflow exhibited apparent advantages in separation efficiency and sample treatment capacity compared with conventional methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Plutonium Immobilization and Mobilization by Soil Organic Matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santschi, Peter H.; Schwehr, Kathleen A.; Xu, Chen

    The human and environmental risks associated with Pu disposal, remediation, and nuclear accident scenarios stem mainly from the very long half-lives of several of its isotopes. The SRS, holding one-third of the nation’s Pu inventory, has a long-term stewardship commitment to investigating Pu behavior in the groundwater and the vast downgradient wetlands. Pu is believed to be essentially immobile due to its low solubility and high particle reactivity toward mineral phases or natural organic matter (NOM). For example, in sediments collected from a region of SRS, close to a wetland and a groundwater plume, 239,240Pu concentrations suggest immobilization by NOM compounds, as Pu correlates with NOM contents. Micro-SXRF data indicate, however, that Pu does not correlate with Fe. Previous studies also reported that Pu can be transported several kilometers in surface water systems, in the form of a colloidal organic matter carrier, through wind/water interactions. The role of NOM in both immobilizing and re-mobilizing Pu has thus been demonstrated. Our results indicate that more Pu (IV) than (V) was bound to soil colloidal organic matter (COM), amended at far-field concentrations. Contrary to expectations, the presence of NOM in the F-Area soil did not enhance Pu fixation to the organic-rich soil, when compared to the organic-poor soil or the mineral phase from the same soil source, due to the formation of COM-bound Pu. Most importantly, Pu uptake by organic-rich soil decreased with increasing pH because more NOM in the colloidal size range desorbed from the particulate fraction at elevated pH, resulting in greater amounts of Pu associated with the COM fraction. This is in contrast to previous observations with low-NOM sediments or minerals, which showed increased Pu uptake with increasing pH levels. This demonstrates that despite Pu immobilization by NOM, COM can convert Pu into a more mobile form. Sediment Pu concentrations in the SRS F-Area wetland were correlated to total organic carbon and total nitrogen contents and even more strongly to hydroxamate siderophore (HS) concentrations. The HS were detected in the particulate or colloidal phases of the sediments but not in the low molecular weight fractions (< 1000 Da). Macromolecules which scavenged the majority of the potentially mobile Pu were further separated from the bulk mobile organic matter fraction (“water extract”) via isoelectric focusing (IEF) experiments. An ESI FTICR-MS spectral comparison of the IEF extract and a siderophore standard (desferrioxamine; DFO) suggested the presence of HS functionalities in the IEF extract.

  11. Measuring the Impact of Technology on Nurse Workflow: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Cady, Rhonda Guse

    2012-01-01

    Background. Investment in health information technology (HIT) is rapidly accelerating. The absence of contextual or situational analysis of the environment in which HIT is incorporated makes it difficult to measure success or failure. The methodology introduced in this paper combines observational research with time-motion study to measure the…

  12. Making Information Literacy Instruction More Efficient by Providing Individual Feedback

    ERIC Educational Resources Information Center

    Peter, Johannes; Leichner, Nikolas; Mayer, Anne-Kathrin; Krampen, Günter

    2017-01-01

    This paper presents an approach to information literacy instruction in colleges and universities that combines online and classroom learning (Blended Learning). The concept includes only one classroom seminar, so the approach presented here can replace existing one-shot sessions at colleges and universities without changes to the current workflow.…

  13. A targeted change-detection procedure by combining change vector analysis and post-classification approach

    NASA Astrophysics Data System (ADS)

    Ye, Su; Chen, Dongmei; Yu, Jie

    2016-04-01

    In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on a one-class classifier, "Support Vector Domain Description (SVDD)", a novel algorithm named "Three-layer SVDD Fusion (TLSF)" is developed specifically for targeted change detection. The proposed algorithm combines one-class classification results generated from change vector maps with those from before- and after-change images in order to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing this algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of the proposed algorithm (Kappa statistics of 86.3% and 87.8% for Cases 1 and 2, respectively) is superior to applying SVDD to change vector analysis and to post-classification comparison.
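
    As a minimal illustration of the one-class idea behind targeted change detection, the sketch below trains scikit-learn's OneClassSVM (a close relative of SVDD, used here as a readily available stand-in) on stacked before/after band values for pixels of the targeted change class and then labels new pixels; the feature construction and data are invented and this is not the TLSF algorithm itself.

        # Illustrative one-class classification for targeted change detection.
        # OneClassSVM stands in for SVDD; features stack "before" and "after" band values.
        # Training samples would be analyst-labelled pixels of the targeted change type.
        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        # Fake data: 200 training pixels of the targeted change class, 6 features each
        # (3 "before" bands stacked with 3 "after" bands); real inputs would come from imagery.
        X_target = rng.normal(loc=[0.1, 0.2, 0.1, 0.6, 0.5, 0.7], scale=0.05, size=(200, 6))

        clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_target)

        # Classify new pixels: +1 = belongs to the targeted change class, -1 = not targeted.
        X_new = rng.normal(loc=[0.1, 0.2, 0.1, 0.3, 0.3, 0.3], scale=0.05, size=(5, 6))
        print(clf.predict(np.vstack([X_target[:5], X_new])))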

  14. Cytoscape: the network visualization tool for GenomeSpace workflows.

    PubMed

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013.

  15. An MRM-based workflow for absolute quantitation of lysine-acetylated metabolic enzymes in mouse liver.

    PubMed

    Xu, Leilei; Wang, Fang; Xu, Ying; Wang, Yi; Zhang, Cuiping; Qin, Xue; Yu, Hongxiu; Yang, Pengyuan

    2015-12-07

    As a key post-translational modification mechanism, protein acetylation plays critical roles in regulating and/or coordinating cell metabolism. Acetylation is a prevalent modification process in enzymes. Protein acetylation occurs in sub-stoichiometric amounts; therefore, extracting biologically meaningful information from these acetylation sites requires an adaptable, sensitive, specific, and robust method for their quantification. In this work, we combine immunoassays and multiple reaction monitoring-mass spectrometry (MRM-MS) technology to develop an absolute quantification method for acetylation. With this hybrid method, we quantified the acetylation level of metabolic enzymes, which can help elucidate the regulatory mechanisms of the studied enzymes. The development of this quantitative workflow is a pivotal step for advancing our knowledge and understanding of the regulatory effects of protein acetylation in physiology and pathophysiology.

  16. Cytoscape: the network visualization tool for GenomeSpace workflows

    PubMed Central

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P.

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013. PMID:25165537

  17. Scrambled eggs: A highly sensitive molecular diagnostic workflow for Fasciola species specific detection from faecal samples

    PubMed Central

    Calvani, Nichola Eliza Davies; Windsor, Peter Andrew; Bush, Russell David

    2017-01-01

    Background Fasciolosis, due to Fasciola hepatica and Fasciola gigantica, is a re-emerging zoonotic parasitic disease of worldwide importance. Human and animal infections are commonly diagnosed by the traditional sedimentation and faecal egg-counting technique. However, this technique is time-consuming and prone to sensitivity errors when a large number of samples must be processed or if the operator lacks sufficient experience. Additionally, diagnosis can only be made once the 12-week pre-patent period has passed. Recently, a commercially available coprological antigen ELISA has enabled detection of F. hepatica prior to the completion of the pre-patent period, providing earlier diagnosis and increased throughput, although species differentiation is not possible in areas of parasite sympatry. Real-time PCR offers the combined benefits of highly sensitive species differentiation for medium to large sample sizes. However, no molecular diagnostic workflow currently exists for the identification of Fasciola spp. in faecal samples. Methodology/Principal findings A new molecular diagnostic workflow for the highly-sensitive detection and quantification of Fasciola spp. in faecal samples was developed. The technique involves sedimenting and pelleting the samples prior to DNA isolation in order to concentrate the eggs, followed by disruption by bead-beating in a benchtop homogeniser to ensure access to DNA. Although both the new molecular workflow and the traditional sedimentation technique were sensitive and specific, the new molecular workflow enabled faster sample throughput in medium to large epidemiological studies, and provided the additional benefit of speciation. Further, good correlation (R2 = 0.74–0.76) was observed between the real-time PCR values and the faecal egg count (FEC) using the new molecular workflow for all herds and sampling periods. Finally, no effect of storage in 70% ethanol was detected on sedimentation and DNA isolation outcomes; enabling transport of samples from endemic to non-endemic countries without the requirement of a complete cold chain. The commercially-available ELISA displayed poorer sensitivity, even after adjustment of the positive threshold (65–88%), compared to the sensitivity (91–100%) of the new molecular diagnostic workflow. Conclusions/Significance Species-specific assays for sensitive detection of Fasciola spp. enable ante-mortem diagnosis in both human and animal settings. This includes Southeast Asia where there are potentially many undocumented human cases and where post-mortem examination of production animals can be difficult. The new molecular workflow provides a sensitive and quantitative diagnostic approach for the rapid testing of medium to large sample sizes, potentially superseding the traditional sedimentation and FEC technique and enabling surveillance programs in locations where animal and human health funding is limited. PMID:28915255

  18. Developing integrated workflows for the digitisation of herbarium specimens using a modular and scalable approach

    PubMed Central

    Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J

    2012-01-01

    Abstract Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors have influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens which is modular and scalable to enable a single overall workflow to be used for all digitisation projects. This integrated workflow is comprised of three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes which will impact on the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. It is shown that each category of data has its own properties which influence the timing of data capture within the workflow. Development of software has been carried out for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow. PMID:22859881

  19. Combined Multidimensional Microscopy as a Histopathology Imaging Tool.

    PubMed

    Shami, Gerald J; Cheng, Delfine; Braet, Filip

    2017-02-01

    Herein, we present a highly versatile bioimaging workflow for the multidimensional imaging of biological structures across vastly different length scales. Such an approach allows for the optimised preparation of samples in one go for consecutive X-ray micro-computed tomography, bright-field light microscopy and backscattered scanning electron microscopy, thus facilitating the disclosure of combined structural information ranging from the gross tissue or cellular level down to the nanometre scale. In the current study, we characterize various aspects of the hepatic vasculature, ranging from such large vessels as branches of the hepatic portal vein and hepatic artery, down to the smallest sinusoidal capillaries. By employing high-resolution backscattered scanning electron microscopy, we were able to further characterize the subcellular features of a range of hepatic sinusoidal cells including liver sinusoidal endothelial cells, pit cells and Kupffer cells. Above all, we demonstrate the capabilities of a specimen manipulation workflow that can be applied and adapted to a plethora of functional and structural investigations and experimental models. Such an approach harnesses the fundamental advantages inherent to the various imaging modalities presented herein, and when combined, offers information not currently available by any single imaging platform. J. Cell. Physiol. 232: 249-256, 2017. © 2016 Wiley Periodicals, Inc.

  20. Dispel4py: An Open-Source Python library for Data-Intensive Seismology

    NASA Astrophysics Data System (ADS)

    Filgueira, Rosa; Krause, Amrey; Spinuso, Alessandro; Klampanos, Iraklis; Danecek, Peter; Atkinson, Malcolm

    2015-04-01

    Scientific workflows are a necessary tool for many scientific communities as they enable easy composition and execution of applications on computing resources while scientists can focus on their research without being distracted by computation management. Nowadays, scientific communities (e.g. Seismology) have access to a large variety of computing resources and their computational problems are best addressed using parallel computing technology. However, successful use of these technologies requires a lot of additional machinery whose use is not straightforward for non-experts: different parallel frameworks (MPI, Storm, multiprocessing, etc.) must be used depending on the computing resources (local machines, grids, clouds, clusters) where applications are run. This implies that, to achieve the best application performance, users usually have to change their codes depending on the features of the platform selected for running them. This work presents dispel4py, a new open-source Python library for describing abstract stream-based workflows for distributed data-intensive applications. Special care has been taken to provide dispel4py with the ability to map abstract workflows to different platforms dynamically at run-time. Currently dispel4py has four mappings: Apache Storm, MPI, multi-threading and sequential. The main goal of dispel4py is to provide an easy-to-use tool to develop and test workflows on local resources by using the sequential mode with a small dataset. Later, once a workflow is ready for long runs, it can be automatically executed on different parallel resources. dispel4py takes care of the underlying mappings by performing an efficient parallelisation. Processing Elements (PEs) represent the basic computational activities of any dispel4py workflow; a PE can be a seismologic algorithm or a data transformation process. For creating a dispel4py workflow, users only have to write a few lines of Python code to describe their PEs and how they are connected; Python is widely supported on many platforms and is popular in many scientific domains, such as the geosciences. Once a dispel4py workflow is written, a user only has to select which mapping they would like to use, and everything else (parallelisation, distribution of data) is carried out by dispel4py without any cost to the user. Among all dispel4py features we would like to highlight the following: * The PEs are connected by streams and not by writing to and reading from intermediate files, avoiding many IO operations. * The PEs can be stored in a registry, so different users can recombine PEs in many different workflows. * dispel4py has been enriched with a provenance mechanism to support runtime provenance analysis. We have adopted the W3C-PROV data model, which is accessible via a prototypal browser-based user interface and a web API. It supports users with the visualisation of graphical products and offers combined operations to access and download the data, which may be selectively stored at runtime into dedicated data archives. dispel4py has already been used by seismologists in the VERCE project to develop different seismic workflows. One of them is the Seismic Ambient Noise Cross-Correlation workflow, which preprocesses and cross-correlates traces from several stations. First, this workflow was tested on a local machine by using a small number of stations as input data.
    Later, it was executed on different parallel platforms (the SuperMUC cluster and the Terracorrelator machine), automatically scaling up by using the MPI and multiprocessing mappings and up to 1000 stations as input data. The results show that dispel4py achieves scalable performance with both mappings on the different parallel platforms tested.
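
    The processing-element idea can be illustrated with a toy example written in plain Python generators (this is a conceptual sketch and deliberately does not use dispel4py's actual API): each PE consumes a stream and emits a stream, so stages are chained without intermediate files, and a real engine would map the same graph to MPI, Storm or threads.

        # Toy stream-based workflow using plain Python generators, illustrating the
        # processing-element (PE) idea: each PE consumes a stream and emits a stream,
        # so data flows between stages without intermediate files. Conceptual sketch
        # only; not the dispel4py API.
        def read_traces(station_ids):
            for sid in station_ids:
                yield {"station": sid, "data": [float(i) for i in range(5)]}  # fake traces

        def preprocess(stream):
            for trace in stream:
                mean = sum(trace["data"]) / len(trace["data"])
                trace["data"] = [x - mean for x in trace["data"]]  # demean
                yield trace

        def cross_correlate(stream):
            traces = list(stream)
            for i in range(len(traces)):
                for j in range(i + 1, len(traces)):
                    a, b = traces[i], traces[j]
                    xc = sum(x * y for x, y in zip(a["data"], b["data"]))
                    yield (a["station"], b["station"], xc)

        # Compose the workflow by chaining the PEs; a real engine would map each PE
        # to MPI ranks, Storm bolts or threads instead of running them sequentially.
        for pair in cross_correlate(preprocess(read_traces(["ST01", "ST02", "ST03"]))):
            print(pair)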

  1. Building asynchronous geospatial processing workflows with web services

    NASA Astrophysics Data System (ADS)

    Zhao, Peisheng; Di, Liping; Yu, Genong

    2012-02-01

    Geoscience research and applications often involve a geospatial processing workflow. This workflow includes a sequence of operations that use a variety of tools to collect, translate, and analyze distributed heterogeneous geospatial data. Asynchronous mechanisms, by which clients initiate a request and then resume their processing without waiting for a response, are very useful for complicated workflows that take a long time to run. Geospatial contents and capabilities are increasingly becoming available online as interoperable Web services. This online availability significantly enhances the ability to use Web service chains to build distributed geospatial processing workflows. This paper focuses on how to orchestrate Web services for implementing asynchronous geospatial processing workflows. The theoretical bases for asynchronous Web services and workflows, including asynchrony patterns and message transmission, are examined to explore different asynchronous approaches and workflow architectures that support asynchronous behavior. A sample geospatial processing workflow, defined in the Open Geospatial Consortium (OGC) Web Services, Phase 6 (OWS-6) initiative, is provided to illustrate the implementation of asynchronous geospatial processing workflows and the challenges in using Web Services Business Process Execution Language (WS-BPEL) to develop them.
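
    A minimal sketch of the asynchronous interaction pattern discussed above is given below, assuming placeholder URLs and a simple JSON status document rather than any specific OGC service: the client submits a processing request, receives the location of a status resource, and polls it until the result is ready instead of blocking on the original request.

        # Illustrative asynchronous geoprocessing client (placeholder URLs, not a real
        # OGC service): submit a request, receive a status URL, poll until complete.
        import json
        import time
        import urllib.request

        def submit(job_url, request_body):
            req = urllib.request.Request(job_url, data=json.dumps(request_body).encode(),
                                         headers={"Content-Type": "application/json"},
                                         method="POST")
            with urllib.request.urlopen(req) as resp:
                return resp.headers["Location"]          # URL of the status resource

        def wait_for_result(status_url, interval=10, timeout=3600):
            deadline = time.time() + timeout
            while time.time() < deadline:
                with urllib.request.urlopen(status_url) as resp:
                    status = json.load(resp)
                if status.get("state") == "succeeded":
                    return status["result_href"]          # client fetches the result separately
                if status.get("state") == "failed":
                    raise RuntimeError(status.get("message", "processing failed"))
                time.sleep(interval)                      # client is free to do other work meanwhile
            raise TimeoutError("job did not finish in time")

        # Usage (would only work against a real asynchronous service):
        # status_url = submit("http://example.org/processes/buffer/jobs", {"distance_km": 5})
        # result_url = wait_for_result(status_url)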

  2. XML schemas for common bioinformatic data types and their application in workflow systems.

    PubMed

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data; therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, and the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.

  3. I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.

    Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
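
    The packaging idea behind BDBags can be sketched with a few lines of standard-library Python: files under a data/ directory are enumerated into a checksum manifest so that the dataset's membership is explicit and verifiable. This is a conceptual sketch only; the real BDBag tooling additionally handles remote members, Research Object metadata, and Minid resolution.

        # Minimal BagIt-style packaging sketch (concept only; not the official bdbag tools).
        import hashlib
        import os

        def make_simple_bag(bag_dir):
            """Write manifest-sha256.txt and bagit.txt for files already under bag_dir/data/."""
            data_dir = os.path.join(bag_dir, "data")
            lines = []
            for root, _, files in os.walk(data_dir):
                for name in sorted(files):
                    path = os.path.join(root, name)
                    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
                    rel = os.path.relpath(path, bag_dir).replace(os.sep, "/")
                    lines.append(f"{digest}  {rel}")
            with open(os.path.join(bag_dir, "manifest-sha256.txt"), "w") as fh:
                fh.write("\n".join(lines) + "\n")
            with open(os.path.join(bag_dir, "bagit.txt"), "w") as fh:
                fh.write("BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n")

        # make_simple_bag("my_dataset")  # expects my_dataset/data/... to already exist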

  4. Preservation of protein fluorescence in embedded human dendritic cells for targeted 3D light and electron microscopy.

    PubMed

    Höhn, K; Fuchs, J; Fröber, A; Kirmse, R; Glass, B; Anders-Össwein, M; Walther, P; Kräusslich, H-G; Dietrich, C

    2015-08-01

    In this study, we present a correlative microscopy workflow to combine detailed 3D fluorescence light microscopy data with ultrastructural information gained by 3D focused ion beam assisted scanning electron microscopy. The workflow is based on an optimized high pressure freezing/freeze substitution protocol that preserves good ultrastructural detail along with retaining the fluorescence signal in the resin embedded specimens. Consequently, cellular structures of interest can readily be identified and imaged by state of the art 3D confocal fluorescence microscopy and are precisely referenced with respect to an imprinted coordinate system on the surface of the resin block. This allows precise guidance of the focused ion beam assisted scanning electron microscopy and limits the volume to be imaged to the structure of interest. This, in turn, minimizes the total acquisition time necessary to conduct the time consuming ultrastructural scanning electron microscope imaging while eliminating the risk to miss parts of the target structure. We illustrate the value of this workflow for targeting virus compartments, which are formed in HIV-pulsed mature human dendritic cells. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  5. Automated Purification of Recombinant Proteins: Combining High-throughput with High Yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chiann Tso; Moore, Priscilla A.; Auberry, Deanna L.

    2006-05-01

    Protein crystallography, mapping protein interactions and other approaches of current functional genomics require not only purifying large numbers of proteins but also obtaining sufficient yield and homogeneity for downstream high-throughput applications. There is a need for the development of robust automated high-throughput protein expression and purification processes to meet these requirements. We developed and compared two alternative workflows for automated purification of recombinant proteins based on expression of bacterial genes in Escherichia coli. The first is a filtration separation protocol based on expression in 800 ml E. coli cultures followed by filtration purification using Ni2+-NTA Agarose (Qiagen). The second is a smaller-scale magnetic separation method based on expression in 25 ml E. coli cultures followed by 96-well purification on MagneHis Ni2+ Agarose (Promega). Both workflows provided comparable average yields of about 8 µg of purified protein per unit of OD at 600 nm of bacterial culture. We discuss advantages and limitations of the automated workflows, which can provide proteins more than 90% pure in the range of 100 µg to 45 mg per purification run, as well as strategies for optimization of these protocols.

  6. Workflow-Based Software Development Environment

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    2013-01-01

    The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (a collaborative web environment).

  7. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflows. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of the normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflows.

  8. The CARMEN software as a service infrastructure.

    PubMed

    Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim

    2013-01-28

    The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code typically written in Matlab, Python, C/C++ and R as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform, as a service without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform, and can be targeted towards any discipline.

  9. Classical workflow nets and workflow nets with reset arcs: using Lyapunov stability for soundness verification

    NASA Astrophysics Data System (ADS)

    Clempner, Julio B.

    2017-01-01

    This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known Lyapunov stability results for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide evidence of several outcomes related to properties such as boundedness, liveness, reversibility and blocking using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.
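
    For readers unfamiliar with the property being verified, the classical soundness conditions for a workflow net with source place $i$ and sink place $o$ (van der Aalst's formulation; the paper's Lyapunov-based restatement is equivalent in intent but not reproduced here) can be written as:

        \begin{align*}
          &\text{(i) option to complete:}    && \forall M \;\bigl([i] \xrightarrow{\,*\,} M \;\Rightarrow\; M \xrightarrow{\,*\,} [o]\bigr),\\
          &\text{(ii) proper completion:}    && \forall M \;\bigl([i] \xrightarrow{\,*\,} M \;\wedge\; M \ge [o] \;\Rightarrow\; M = [o]\bigr),\\
          &\text{(iii) no dead transitions:} && \forall t \in T \;\exists M \;\bigl([i] \xrightarrow{\,*\,} M \;\wedge\; M \xrightarrow{\,t\,}\bigr).
        \end{align*}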

  10. Combined spectroscopic and quantum chemical studies of ezetimibe

    NASA Astrophysics Data System (ADS)

    Prajapati, Preeti; Pandey, Jaya; Shimpi, Manishkumar R.; Srivastava, Anubha; Tandon, Poonam; Velaga, Sitaram P.; Sinha, Kirti

    2016-12-01

    Ezetimibe (EZT) is a hypocholesterolemic agent used for the treatment of elevated blood cholesterol levels, as it lowers blood cholesterol by blocking the absorption of cholesterol in the intestine. This study aims to combine experimental and computational methods to provide insights into the structural and vibrational spectroscopic properties of EZT, which are important for explaining the physical and biological properties of the drug substance. A computational study of the molecular properties of ezetimibe is presented using density functional theory (DFT) with the B3LYP functional and the 6-311++G(d,p) basis set. A detailed vibrational assignment has been made for the observed IR and Raman spectra of EZT. In addition to the conformational study, hydrogen bonding and molecular docking studies have also been performed. For the conformational studies, double-well potential energy curves have been plotted for rotation around the six flexible bonds of the molecule. The UV absorption spectrum was measured in methanol and compared with the spectrum calculated in a solvent environment (IEF-PCM) using the TD-DFT/6-31G basis set. The HOMO-LUMO energy gaps of both conformers have also been calculated in order to predict chemical reactivity and stability. The stability of the molecule was also examined by means of natural bond orbital (NBO) analysis. To account for the chemical reactivity and site selectivity of the molecule, a molecular electrostatic potential (MEPS) map has been plotted. The combination of experimental and calculated results provides insight into the structural and vibrational spectroscopic properties of EZT. To give insight into the biological activity of EZT, molecular docking of EZT with the protein NPC1L1 has been performed.

  11. Biowep: a workflow enactment portal for bioinformatics applications.

    PubMed

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack these skills. A portal enabling these researchers to benefit from new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is further being developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.

  12. Biowep: a workflow enactment portal for bioinformatics applications

    PubMed Central

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-01-01

    Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack these skills. A portal enabling these researchers to benefit from new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is further being developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics – LITBIO. PMID:17430563

  13. Introducing students to digital geological mapping: A workflow based on cheap hardware and free software

    NASA Astrophysics Data System (ADS)

    Vrabec, Marko; Dolžan, Erazem

    2016-04-01

    The undergraduate field course in Geological Mapping at the University of Ljubljana involves 20-40 students per year, which precludes the use of specialized rugged digital field equipment as the costs would be well beyond the means of the Department. A different mapping area is selected each year with the aim to provide typical conditions that a professional geologist might encounter when doing fieldwork in Slovenia, which includes rugged relief, dense tree cover, and moderately-well- to poorly-exposed bedrock due to vegetation and urbanization. It is therefore mandatory that the digital tools and workflows are combined with classical methods of fieldwork, since, for example, full-time precise GNSS positioning is not viable under such circumstances. Additionally, due to the prevailing combination of complex geological structure with generally poor exposure, students cannot be expected to produce line (vector) maps of geological contacts on the go, so there is no need for such functionality in the hardware and software that we use in the field. Our workflow therefore still relies on paper base maps, but is strongly complemented with digital tools to provide robust positioning, track recording, and acquisition of various point-based data. The primary field hardware is students' Android-based smartphones and, optionally, tablets. For our purposes, the built-in GNSS chips provide adequate positioning precision most of the time, particularly if they are GLONASS-capable. We use Oruxmaps, a powerful free offline map viewer for the Android platform, which facilitates the use of custom-made geopositioned maps. For digital base maps, which we prepare in the free QGIS software on Windows, we use scanned topographic maps provided by the National Geodetic Authority, but also other maps such as aerial imagery, processed Digital Elevation Models, scans of existing geological maps, etc. Point data, like important outcrop locations or structural measurements, are entered into Oruxmaps as waypoints. Students are also encouraged to directly measure structural data with specialized Android apps such as the MVE FieldMove Clino. Digital field data is exported from Oruxmaps to Windows computers primarily in the ubiquitous GPX data format and then integrated into the QGIS environment. Recorded GPX tracks are also used with the free Geosetter Windows software to geoposition and tag any digital photographs taken in the field. With minimal expenses, our workflow provides the students with basic familiarity and experience in using digital field tools and methods. The workflow is also practical enough for the prevailing field conditions of Slovenia that faculty staff use it in geological mapping for scientific research and consultancy work.
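
    The GPX-to-GIS step of this workflow can be sketched with the standard library alone: waypoints exported from Oruxmaps are read from a GPX file and written to a CSV that QGIS can load as delimited-text points. The file names below are placeholders and only standard GPX fields (name, lat, lon, ele) are assumed.

        # Read waypoints from a GPX file (as exported by Oruxmaps) and write a CSV that
        # QGIS can load as delimited-text points. Uses only the standard library.
        import csv
        import xml.etree.ElementTree as ET

        GPX_NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

        def gpx_waypoints_to_csv(gpx_path, csv_path):
            root = ET.parse(gpx_path).getroot()
            with open(csv_path, "w", newline="") as fh:
                writer = csv.writer(fh)
                writer.writerow(["name", "lat", "lon", "ele"])
                for wpt in root.findall("gpx:wpt", GPX_NS):
                    name = wpt.findtext("gpx:name", default="", namespaces=GPX_NS)
                    ele = wpt.findtext("gpx:ele", default="", namespaces=GPX_NS)
                    writer.writerow([name, wpt.get("lat"), wpt.get("lon"), ele])

        # gpx_waypoints_to_csv("field_day.gpx", "outcrops.csv")  # placeholder file names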

  14. International Conference on Indium Phosphide and Related Materials (22nd) (IPRM) held on 31 May-4 Jun 2010, at Takamatsu Symbol Tower, Kagawa, Japan

    DTIC Science & Technology

    2010-08-13

    [OCR fragments of the proceedings front matter; recoverable affiliations include the ANODE and EPIPHY groups at IEMN (UMR CNRS 8520, Villeneuve d'Ascq, France), CEA/LETI (Grenoble, France), IEF (UMR CNRS 8622, Orsay, France), and CIMAP, together with a reference to a Network Device Project contracted by PETRA with the New Energy and Industrial Technology Development Organization (NEDO).]

  15. Tallying the U.S.-China Military Scorecard: Relative Capabilities and the Balance of Power, 1996-2017

    DTIC Science & Technology

    2015-01-01

    RAND Corporation research brief. Recoverable abstract fragment: "…the scorecards provide a basis for deeper public discussion of how the balance of power in Asia has evolved and the challenges the United…" (the remainder of the scanned page is report-documentation boilerplate).

  16. Advances in Navigation Sensors and Integration Technology (Les avancees en matiere de capteurs de navigation et de technologies d’integration)

    DTIC Science & Technology

    2004-02-01

    Recoverable text fragments: "…also referred to as a Foucault pendulum gyroscope. Rate about the z-axis (i.e., about the vertical post) is detected by the Coriolis acceleration…" The remainder of the scanned page lists contributors and affiliations, including DGA/STTC/DTGN, MBDA, SAGEM SA, THALES Avionics, CEA-LETI, CEM2/Montpellier, IEF, LPMO, and ONERA.

  17. Exploring the impact of an automated prescription-filling device on community pharmacy technician workflow.

    PubMed

    Walsh, Kristin E; Chui, Michelle Anne; Kieser, Mara A; Williams, Staci M; Sutter, Susan L; Sutter, John G

    2011-01-01

    To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign.

  18. JTSA: an open source framework for time series abstractions.

    PubMed

    Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana

    2015-10-01

    The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, where JTSA is used to extract relevant patterns from data related to the long-term monitoring of diabetic patients. JTSA's versatility is demonstrated by its possible uses, both as a standalone tool for data summarization and as a module embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
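
    The core state-abstraction idea implemented by frameworks such as JTSA can be sketched in a few lines (a conceptual illustration in Python, not JTSA's Java API): raw samples are mapped to qualitative states and consecutive samples with the same state are merged into intervals; the glucose thresholds and data below are invented.

        # Minimal state temporal abstraction: map raw glucose samples to qualitative
        # states and merge consecutive samples with the same state into intervals.
        # Thresholds and data are illustrative only.
        def to_state(value, low=70, high=180):
            if value < low:
                return "LOW"
            if value > high:
                return "HIGH"
            return "NORMAL"

        def state_abstraction(samples):
            """samples: list of (timestamp, value); returns (start, end, state) intervals."""
            intervals = []
            for t, v in samples:
                s = to_state(v)
                if intervals and intervals[-1][2] == s:
                    intervals[-1] = (intervals[-1][0], t, s)   # extend the current interval
                else:
                    intervals.append((t, t, s))
            return intervals

        samples = [(0, 95), (1, 110), (2, 190), (3, 200), (4, 130), (5, 60)]
        print(state_abstraction(samples))
        # [(0, 1, 'NORMAL'), (2, 3, 'HIGH'), (4, 4, 'NORMAL'), (5, 5, 'LOW')]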

  19. Facilitating hydrological data analysis workflows in R: the RHydro package

    NASA Astrophysics Data System (ADS)

    Buytaert, Wouter; Moulds, Simon; Skoien, Jon; Pebesma, Edzer; Reusser, Dominik

    2015-04-01

    The advent of new technologies such as web-services and big data analytics holds great promise for hydrological data analysis and simulation. Driven by the need for better water management tools, it allows for the construction of much more complex workflows, that integrate more and potentially more heterogeneous data sources with longer tool chains of algorithms and models. With the scientific challenge of designing the most adequate processing workflow comes the technical challenge of implementing the workflow with a minimal risk for errors. A wide variety of new workbench technologies and other data handling systems are being developed. At the same time, the functionality of available data processing languages such as R and Python is increasing at an accelerating pace. Because of the large diversity of scientific questions and simulation needs in hydrology, it is unlikely that one single optimal method for constructing hydrological data analysis workflows will emerge. Nevertheless, languages such as R and Python are quickly gaining popularity because they combine a wide array of functionality with high flexibility and versatility. The object-oriented nature of high-level data processing languages makes them particularly suited for the handling of complex and potentially large datasets. In this paper, we explore how handling and processing of hydrological data in R can be facilitated further by designing and implementing a set of relevant classes and methods in the experimental R package RHydro. We build upon existing efforts such as the sp and raster packages for spatial data and the spacetime package for spatiotemporal data to define classes for hydrological data (HydroST). In order to handle simulation data from hydrological models conveniently, a HM class is defined. Relevant methods are implemented to allow for an optimal integration of the HM class with existing model fitting and simulation functionality in R. Lastly, we discuss some of the design challenges of the RHydro package, including integration with big data technologies, web technologies, and emerging data models in hydrology.
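
    RHydro itself is written in R; the Python sketch below is only a hypothetical analogue of the class design described above, pairing an observed and a simulated series in an "HM"-style container with a standard goodness-of-fit method. All names are illustrative.

      # Hypothetical Python analogue of the class design sketched above; the
      # actual RHydro package is an R package and uses different classes.
      from dataclasses import dataclass, field
      from typing import Dict, List

      @dataclass
      class HydroSeries:
          """A spatially referenced hydrological time series (e.g., discharge at a gauge)."""
          station_id: str
          lon: float
          lat: float
          times: List[float]
          values: List[float]

      @dataclass
      class ModelRun:
          """Pairs observed and simulated series, mimicking an 'HM'-style container."""
          observed: HydroSeries
          simulated: HydroSeries
          parameters: Dict[str, float] = field(default_factory=dict)

          def nash_sutcliffe(self) -> float:
              """Nash-Sutcliffe efficiency of the simulation against the observations."""
              obs, sim = self.observed.values, self.simulated.values
              mean_obs = sum(obs) / len(obs)
              sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
              var = sum((o - mean_obs) ** 2 for o in obs)
              return 1.0 - sse / var

      obs = HydroSeries("gauge-1", 7.6, 51.9, [0, 1, 2, 3], [10.0, 12.0, 30.0, 18.0])
      sim = HydroSeries("gauge-1", 7.6, 51.9, [0, 1, 2, 3], [11.0, 13.0, 26.0, 20.0])
      print(ModelRun(obs, sim, {"k": 0.3}).nash_sutcliffe())  # close to 1 means a good fit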

  20. Introducing W.A.T.E.R.S.: a workflow for the alignment, taxonomy, and ecology of ribosomal sequences.

    PubMed

    Hartman, Amber L; Riddle, Sean; McPhillips, Timothy; Ludäscher, Bertram; Eisen, Jonathan A

    2010-06-12

    For more than two decades microbiologists have used a highly conserved microbial gene as a phylogenetic marker for bacteria and archaea. The small-subunit ribosomal RNA gene, also known as 16S rRNA, is encoded by ribosomal DNA, 16S rDNA, and has provided a powerful comparative tool to microbial ecologists. Over time, the microbial ecology field has matured from small-scale studies in a select number of environments to massive collections of sequence data that are paired with dozens of corresponding collection variables. As the complexity of data and tool sets has grown, the need for flexible automation and maintenance of the core processes of 16S rDNA sequence analysis has increased correspondingly. We present WATERS, an integrated approach for 16S rDNA analysis that bundles a suite of publicly available 16S rDNA analysis software tools into a single software package. The "toolkit" includes sequence alignment, chimera removal, OTU determination, taxonomy assignment, phylogenetic tree construction as well as a host of ecological analysis and visualization tools. WATERS employs a flexible, collection-oriented 'workflow' approach using the open-source Kepler system as a platform. By packaging available software tools into a single automated workflow, WATERS simplifies 16S rDNA analyses, especially for those without specialized bioinformatics or programming expertise. In addition, WATERS, like some of the newer comprehensive rRNA analysis tools, allows researchers to minimize the time dedicated to carrying out tedious informatics steps and to focus their attention instead on the biological interpretation of the results. One advantage of WATERS over other comprehensive tools is that the use of the Kepler workflow system facilitates result interpretation and reproducibility via a data provenance sub-system. Furthermore, new "actors" can be added to the workflow as desired and we see WATERS as an initial seed for a sizeable and growing repository of interoperable, easy-to-combine tools for asking increasingly complex microbial ecology questions.
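
    As a rough illustration of the bundled-pipeline idea (not the actual WATERS/Kepler components), the Python sketch below chains toy stand-ins for read filtering and OTU clustering and records a simple provenance trail for each stage.

      # Hedged sketch: each pipeline stage is a plain callable and the runner
      # records simple provenance (stage name, input/output sizes), loosely
      # echoing the Kepler-based workflow described above. The stage bodies are
      # toy stand-ins, not the real tools WATERS bundles.
      from typing import Callable, Dict, List, Tuple

      def drop_short_reads(seqs: List[str]) -> List[str]:
          return [s for s in seqs if len(s) >= 5]           # stand-in for quality/chimera filtering

      def cluster_otus(seqs: List[str]) -> Dict[str, List[str]]:
          otus: Dict[str, List[str]] = {}
          for s in seqs:
              otus.setdefault(s[:3], []).append(s)           # toy clustering by 3-mer prefix
          return otus

      def run_pipeline(data, stages: List[Tuple[str, Callable]]):
          provenance = []
          for name, stage in stages:
              before = len(data)
              data = stage(data)
              provenance.append({"stage": name, "n_in": before, "n_out": len(data)})
          return data, provenance

      reads = ["ACGTACGT", "ACGTTTTT", "GGG", "TTTTACGT"]
      otus, prov = run_pipeline(reads, [("filter", drop_short_reads), ("cluster", cluster_otus)])
      print(otus)   # {'ACG': ['ACGTACGT', 'ACGTTTTT'], 'TTT': ['TTTTACGT']}
      print(prov)   # per-stage records usable for result interpretation and reproducibility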

  1. Generic worklist handler for workflow-enabled products

    NASA Astrophysics Data System (ADS)

    Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas

    1999-07-01

    Workflow management (WfM) is an emerging field of medical information technology. It appears as a promising key technology to model, optimize and automate processes, for the sake of improved efficiency, reduced costs and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic work list handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded work list handlers will be called 'Workflow Enabled Application Systems'. In this paper we discuss the functional requirements of work list handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management and - later in the paper - the available standards as defined by the Workflow Management Coalition are briefly reviewed.
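
    The sketch below is a purely conceptual Python rendering of such a generic work list handler (the actual WfMC interface definitions are more elaborate): the enactment service offers work items, and the embedded handler lets the application list, claim and complete them.

      # Conceptual sketch only: a minimal work list handler mediating between a
      # workflow enactment service and an application system.
      from dataclasses import dataclass
      from typing import Dict, List, Optional

      @dataclass
      class WorkItem:
          item_id: str
          activity: str
          patient_id: str
          state: str = "offered"      # offered -> claimed -> completed

      class WorklistHandler:
          """Embedded in a 'workflow enabled application system'."""
          def __init__(self) -> None:
              self._items: Dict[str, WorkItem] = {}

          def offer(self, item: WorkItem) -> None:            # called by the enactment service
              self._items[item.item_id] = item

          def worklist(self, activity: Optional[str] = None) -> List[WorkItem]:
              return [w for w in self._items.values()
                      if w.state == "offered" and (activity is None or w.activity == activity)]

          def claim(self, item_id: str) -> WorkItem:           # application takes responsibility
              item = self._items[item_id]
              item.state = "claimed"
              return item

          def complete(self, item_id: str, result: Dict) -> None:
              self._items[item_id].state = "completed"
              # in a real system the result and state change would be reported
              # back to the workflow enactment service here

      handler = WorklistHandler()
      handler.offer(WorkItem("wi-1", "acquire-image", "patient-42"))
      print([w.item_id for w in handler.worklist("acquire-image")])   # ['wi-1']
      handler.complete(handler.claim("wi-1").item_id, {"status": "ok"})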

  2. Workflow with pitfalls to derive a regional airborne magnetic compilation

    NASA Astrophysics Data System (ADS)

    Brönner, Marco; Baykiev, Eldar; Ebbing, Jörg

    2017-04-01

    Today, large scale magnetic maps are usually a patchwork of different airborne surveys of different sizes, resolutions and vintages. Airborne magnetic acquisition is a fast and economic method to map and gain geological and tectonic information for large areas, onshore and offshore. Depending on the aim of a survey, acquisition parameters like altitude and profile distance are usually adjusted to match the purpose of investigation. The subsequent data processing commonly follows a standardized workflow comprising core-field subtraction and line leveling to yield a coherent crustal field magnetic grid for a survey area. The resulting data make it possible to correlate anomalies with geological and tectonic features in the subsurface, which is of importance for, e.g., oil and mineral exploration. Crustal scale magnetic interpretation and modeling demand regional compilation of magnetic data and the merger of adjacent magnetic surveys. These studies not only focus on shallower sources, reflected by short to intermediate magnetic wavelength anomalies, but also have a particular interest in the long wavelengths deriving from deep-seated sources. However, whilst the workflow to produce such a merger is supported by quite a few powerful routines, the resulting compilation contains several pitfalls and limitations, which have been discussed before but are still rarely recognized. The maximum wavelength that can be resolved by each individual survey is directly related to the survey size, and consequently a merger will contribute erroneous long-wavelength components to the magnetic data compilation. To minimize this problem and to homogenize the longer wavelengths, a first-order approach is to combine the airborne compilation with satellite magnetic data, which is sufficient only under particular preconditions. A more advanced approach considers the gap in frequencies between airborne and satellite data, which motivated countries like Sweden and Australia (AWAGS) to collect high-altitude, long-range airborne magnetic data for the entire country in order to homogenize the high-resolution magnetic data before the merger with satellite data. We present the compilation of a regional magnetic map for an area in northern Europe and discuss the problems and pitfalls of the commonly applied workflow.

  3. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
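
    As a flavour of what such heuristics look like (this is not one of the four algorithms proposed in the paper), the Python sketch below places each incoming workflow request on the cheapest already-running VM that can still meet its deadline, and provisions a new VM otherwise.

      # Illustrative greedy scheduling heuristic; names and costs are made up.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class VM:
          name: str
          cost_per_hour: float
          busy_until: float = 0.0          # hours from now at which the VM becomes free

      @dataclass
      class WorkflowRequest:
          name: str
          runtime_hours: float
          deadline_hours: float

      def schedule(req: WorkflowRequest, pool: List[VM], new_vm_cost: float = 1.0) -> VM:
          candidates = [vm for vm in pool if vm.busy_until + req.runtime_hours <= req.deadline_hours]
          if candidates:
              vm = min(candidates, key=lambda v: v.cost_per_hour)   # cheapest feasible VM
          else:
              vm = VM(f"vm-{len(pool) + 1}", new_vm_cost)           # scale out
              pool.append(vm)
          vm.busy_until += req.runtime_hours
          return vm

      pool = [VM("vm-1", 0.5, busy_until=2.0), VM("vm-2", 0.9, busy_until=0.0)]
      print(schedule(WorkflowRequest("wf-A", runtime_hours=1.0, deadline_hours=1.5), pool).name)  # vm-2
      print(schedule(WorkflowRequest("wf-B", runtime_hours=1.0, deadline_hours=4.0), pool).name)  # vm-1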

  4. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE PAGES

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...

    2016-07-21

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad-hoc, they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates i.e., sequence, parallel, split, merge, that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
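
    The Python sketch below only illustrates the template idea (sequence, parallel, merge) in a generic way; it is not the Tigres API.

      # Hedged sketch of workflow templates: each template composes ordinary
      # functions into a pipeline stage.
      from concurrent.futures import ThreadPoolExecutor
      from typing import Callable, Iterable, List

      def sequence(funcs: List[Callable], value):
          """Run funcs one after another, feeding each result into the next."""
          for f in funcs:
              value = f(value)
          return value

      def parallel(func: Callable, items: Iterable) -> List:
          """Apply func to every item concurrently and gather the results in order."""
          with ThreadPoolExecutor() as pool:
              return list(pool.map(func, items))

      def merge(parts: List[float]) -> float:
          return sum(parts)

      # A tiny pipeline: square each input in parallel, then merge the results.
      data = [1, 2, 3, 4]
      result = sequence([lambda xs: parallel(lambda x: x * x, xs), merge], data)
      print(result)   # 30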

  5. Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, Valerie; Fox, James; Ghoshal, Devarshi

    The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad-hoc, they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates i.e., sequence, parallel, split, merge, that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.

  6. Standardizing clinical trials workflow representation in UML for international site comparison.

    PubMed

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M O; Rodrigues, Maria J; Shah, Jatin; Loures, Marco R; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-11-09

    With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.

  7. Standardizing Clinical Trials Workflow Representation in UML for International Site Comparison

    PubMed Central

    de Carvalho, Elias Cesar Araujo; Jayanti, Madhav Kishore; Batilana, Adelia Portero; Kozan, Andreia M. O.; Rodrigues, Maria J.; Shah, Jatin; Loures, Marco R.; Patil, Sunita; Payne, Philip; Pietrobon, Ricardo

    2010-01-01

    Background With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. Methods Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. Results Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring full commitment of CRCs, transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. Conclusions This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows. PMID:21085484

  8. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.

    2008-12-01

    NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
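
    The Python sketch below illustrates the space/time "matchup" step in isolation (it is not a SciFlo service): for each retrieval from one instrument it keeps the nearest retrieval from another instrument within distance and time tolerances. Field names and tolerances are made up for illustration.

      # Illustrative pairwise space/time matchup between two instrument swaths.
      import math
      from typing import Dict, List, Optional

      def haversine_km(lat1, lon1, lat2, lon2) -> float:
          r = 6371.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def matchup(a_obs: List[Dict], b_obs: List[Dict],
                  max_km: float = 50.0, max_minutes: float = 30.0) -> List[Dict]:
          pairs = []
          for a in a_obs:
              best: Optional[Dict] = None
              best_d = max_km
              for b in b_obs:
                  if abs(a["t"] - b["t"]) > max_minutes:
                      continue                                  # not near-simultaneous
                  d = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
                  if d <= best_d:
                      best, best_d = b, d
              if best is not None:
                  pairs.append({"a": a, "b": best, "dist_km": round(best_d, 1)})
          return pairs

      airs = [{"t": 0, "lat": 34.0, "lon": -118.0, "T": 288.1}]
      modis = [{"t": 12, "lat": 34.2, "lon": -118.1, "T": 287.6},
               {"t": 90, "lat": 34.0, "lon": -118.0, "T": 289.0}]
      print(matchup(airs, modis))   # keeps only the nearby, near-simultaneous pair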

  9. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.

    2009-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). 
The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.

  10. Visualisation methods for large provenance collections in data-intensive collaborative platforms

    NASA Astrophysics Data System (ADS)

    Spinuso, Alessandro; Fligueira, Rosa; Atkinson, Malcolm; Gemuend, Andre

    2016-04-01

    This work investigates improving the methods of visually representing provenance information in the context of modern data-driven scientific research. It explores scenarios where data-intensive workflows systems are serving communities of researchers within collaborative environments, supporting the sharing of data and methods, and offering a variety of computation facilities, including HPC, HTC and Cloud. It focuses on the exploration of big-data visualization techniques aiming at producing comprehensive and interactive views on top of large and heterogeneous provenance data. The same approach is applicable to control-flow and data-flow workflows or to combinations of the two. This flexibility is achieved using the W3C-PROV recommendation as a reference model, especially its workflow oriented profiles such as D-PROV (Messier et al. 2013). Our implementation is based on the provenance records produced by the dispel4py data-intensive processing library (Filgueira et al. 2015). dispel4py is an open-source Python framework for describing abstract stream-based workflows for distributed data-intensive applications, developed during the VERCE project. dispel4py enables scientists to develop their scientific methods and applications on their laptop and then run them at scale on a wide range of e-Infrastructures (Cloud, Cluster, etc.) without making changes. Users can therefore focus on designing their workflows at an abstract level, describing actions, input and output streams, and how they are connected. The dispel4py system then maps these descriptions to the enactment platforms, such as MPI, Storm, multiprocessing. It provides a mechanism which allows users to determine the provenance information to be collected and to analyze it at runtime. For this work we consider alternative visualisation methods for provenance data, from infinite lists and localised interactive graphs, to radial-views. The latter technique has been positively explored in many fields, from text data visualisation to genomics and social networking analysis. Its adoption for provenance has been presented in literature (Borkin et al. 2013) in the context of parent-child relationships across processes, constructed from control-flow information. Computer graphics research has focused on the advantage of this radial distribution of interlinked information and on ways to improve the visual efficiency and tunability of such representations, like the Hierarchical Edge Bundles visualisation method, (Holten et al. 2006), which aims at reducing visual clutter of highly connected structures via the generation of bundles. Our approach explores the potential of the combination of these methods. It serves environments where the size of the provenance collection, coupled with the diversity of the infrastructures and the domain metadata, make the extrapolation of usage trends extremely challenging. Applications of such visualisation systems can engage groups of scientists, data providers and computational engineers, by serving visual snapshots that highlight relationships between an item and its connected processes. We will present examples of comprehensive views on the distribution of processing and data transfers during a workflow's execution in HPC, as well as cross workflows interactions and internal dynamics. The latter in the context of faceted searches on domain metadata values-range. These are obtained from the analysis of real provenance data generated by the processing of seismic traces performed through the VERCE platform.

  11. Workflow and maintenance characteristics of five automated laboratory instruments for the diagnosis of sexually transmitted infections.

    PubMed

    Ratnam, Sam; Jang, Dan; Gilchrist, Jodi; Smieja, Marek; Poirier, Andre; Hatchette, Todd; Flandin, Jean-Frederic; Chernesky, Max

    2014-07-01

    The choice of a suitable automated system for a diagnostic laboratory depends on various factors. Comparative workflow studies provide quantifiable and objective metrics to determine hands-on time during specimen handling and processing, reagent preparation, return visits and maintenance, and test turnaround time and throughput. Using objective time study techniques, workflow characteristics for processing 96 and 192 tests were determined on m2000 RealTime (Abbott Molecular), Viper XTR (Becton Dickinson), cobas 4800 (Roche Molecular Diagnostics), Tigris (Hologic Gen-Probe), and Panther (Hologic Gen-Probe) platforms using second-generation assays for Chlamydia trachomatis and Neisseria gonorrhoeae. A combination of operational and maintenance steps requiring manual labor showed that Panther had the shortest overall hands-on times and Viper XTR the longest. Both Panther and Tigris showed greater efficiency whether 96 or 192 tests were processed. Viper XTR and Panther had the shortest times to results and m2000 RealTime the longest. Sample preparation and loading time was the shortest for Panther and longest for cobas 4800. Mandatory return visits were required only for m2000 RealTime and cobas 4800 when 96 tests were processed, and both required substantially more hands-on time than the other systems due to increased numbers of return visits when 192 tests were processed. These results show that there are substantial differences in the amount of labor required to operate each system. Assay performance, instrumentation, testing capacity, workflow, maintenance, and reagent costs should be considered in choosing a system. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  12. FY 1992-1993 RDT&E Descriptive Summaries: DARPA

    DTIC Science & Technology

    1991-02-01

    combining natural language and user workflow model information. * Determine effectiveness of auditory models as preprocessors for robust speech...for indexing and retrieving design knowledge. * Evaluate ability of message understanding systems to extract crisis -situation data from news wires...energy effects , underwater vehicles, neutrino detection, speech, tailored nuclear weapons, hypervelocity, nanosecond timing, and MAD/RPV. FY 1991 Planned

  13. Quantitative workflow based on NN for weighting criteria in landfill suitability mapping

    NASA Astrophysics Data System (ADS)

    Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Alkhasawneh, Mutasem Sh.; Aziz, Hamidi Abdul

    2017-10-01

    Our study aims to introduce a new quantitative workflow that integrates neural networks (NNs) and multi-criteria decision analysis (MCDA). Existing MCDA workflows reveal a number of drawbacks because of their reliance on human knowledge in the weighting stage. Thus, a new workflow is presented to produce suitability maps at the regional scale for solid waste planning based on NNs. A feed-forward neural network is employed in the workflow. A total of 34 criteria were pre-processed to establish the input dataset for NN modelling. The final learned network is used to acquire the weights of the criteria. Accuracies of 95.2% and 93.2% were achieved for the training dataset and testing dataset, respectively. The workflow was found to be capable of reducing human interference to generate highly reliable maps. The proposed workflow demonstrates the applicability of NNs in generating landfill suitability maps and the feasibility of integrating them with existing MCDA workflows.
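
    One common way to turn a trained feed-forward network into relative criterion weights is Garson-style partitioning of the connection weights; the numpy sketch below illustrates that idea, although the paper's exact weighting procedure may differ.

      # Garson-style weight partitioning: derive relative input importances from
      # the connection weights of a trained single-hidden-layer network.
      import numpy as np

      def garson_importances(w_in_hidden: np.ndarray, w_hidden_out: np.ndarray) -> np.ndarray:
          """w_in_hidden: (n_inputs, n_hidden); w_hidden_out: (n_hidden,). Returns weights summing to 1."""
          w = np.abs(w_in_hidden)
          v = np.abs(w_hidden_out)
          share = w / w.sum(axis=0, keepdims=True)        # each input's share of every hidden neuron
          contrib = (share * v).sum(axis=1)               # weight hidden neurons by their output strength
          return contrib / contrib.sum()

      rng = np.random.default_rng(0)
      w_ih = rng.normal(size=(4, 6))    # e.g., 4 suitability criteria, 6 hidden units (toy values)
      w_ho = rng.normal(size=6)
      weights = garson_importances(w_ih, w_ho)
      print(weights.round(3), weights.sum())   # criterion weights usable in the MCDA overlay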

  14. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237

  15. Scientific Data Management (SDM) Center for Enabling Technologies. Final Report, 2007-2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ludascher, Bertram; Altintas, Ilkay

    Our contributions to advancing the State of the Art in scientific workflows have focused on the following areas: Workflow development; Generic workflow components and templates; Provenance collection and analysis; and, Workflow reliability and fault tolerance.

  16. Quantum chemical calculations and analysis of FTIR, FT-Raman and UV-Vis spectra of temozolomide molecule

    NASA Astrophysics Data System (ADS)

    Bhat, Sheeraz Ahmad; Ahmad, Shabbir

    2015-11-01

    A combined experimental and theoretical study of the structure, vibrational and electronic spectra of the temozolomide molecule, which is largely used in the treatment of brain tumours, is presented. FTIR (4000-400 cm-1) and FT-Raman spectra (4000-50 cm-1) have been recorded and analysed using anharmonic frequency calculations at the VPT2, VSCF and CC-VSCF levels of theory within the B3LYP/6-311++G(d,p) framework. Anharmonic methods give accurate frequencies of fundamental modes, overtones as well as Fermi resonances, and account for the coupling of different modes. The anharmonic frequencies calculated using the VPT2 and CC-VSCF methods show better agreement with the experimental data. Harmonic frequencies including solvent effects are also computed using the IEF-PCM model. The magnitudes of coupling between pairs of modes have been calculated using coupling integrals based on the 2MR-QFF approximation. Intermolecular interactions are discussed for three possible dimers of temozolomide. The UV-Vis spectrum, measured in ethanol, is compared with the spectrum calculated at the TD-DFT/6-311++G(d,p) level of theory. The electronic properties, such as excitation energies, frontier molecular orbital energies and the assignments of the absorption bands, are also discussed.

  17. Antibody-based detection of protein phosphorylation status to track the efficacy of novel therapies using nanogram protein quantities from stem cells and cell lines.

    PubMed

    Aspinall-O'Dea, Mark; Pierce, Andrew; Pellicano, Francesca; Williamson, Andrew J; Scott, Mary T; Walker, Michael J; Holyoake, Tessa L; Whetton, Anthony D

    2015-01-01

    This protocol describes a highly reproducible antibody-based method that provides protein level and phosphorylation status information from nanogram quantities of protein cell lysate. Nanocapillary isoelectric focusing (cIEF) combines with UV-activated linking chemistry to detect changes in phosphorylation status. As an example application, we describe how to detect changes in response to tyrosine kinase inhibitors (TKIs) in the phosphorylation status of the adaptor protein CrkL, a major substrate of the oncogenic tyrosine kinase BCR-ABL in chronic myeloid leukemia (CML), using highly enriched CML stem cells and mature cell populations in vitro. This protocol provides a 2.5 pg/nl limit of protein detection (<0.2% of a stem cell sample containing <10(4) cells). Additional assays are described for phosphorylated tyrosine 207 (pTyr207)-CrkL and the protein tyrosine phosphatase PTPRC/CD45; these assays were developed using this protocol and applied to CML patient samples. This method is of high throughput, and it can act as a screen for in vitro cancer stem cell response to drugs and novel agents.

  18. Exploring the impact of an automated prescription-filling device on community pharmacy technician workflow

    PubMed Central

    Walsh, Kristin E.; Chui, Michelle Anne; Kieser, Mara A.; Williams, Staci M.; Sutter, Susan L.; Sutter, John G.

    2012-01-01

    Objective To explore community pharmacy technician workflow change after implementation of an automated robotic prescription-filling device. Methods At an independent community pharmacy in rural Mayville, WI, pharmacy technicians were observed before and 3 months after installation of an automated robotic prescription-filling device. The main outcome measures were sequences and timing of technician workflow steps, workflow interruptions, automation surprises, and workarounds. Results Of the 77 and 80 observations made before and 3 months after robot installation, respectively, 17 different workflow sequences were observed before installation and 38 after installation. Average prescription filling time was reduced by 40 seconds per prescription with use of the robot. Workflow interruptions per observation increased from 1.49 to 1.79 (P = 0.11), and workarounds increased from 10% to 36% after robot use. Conclusion Although automated prescription-filling devices can increase efficiency, workflow interruptions and workarounds may negate that efficiency. Assessing changes in workflow and sequencing of tasks that may result from the use of automation can help uncover opportunities for workflow policy and procedure redesign. PMID:21896459

  19. Model Checking for Verification of Interactive Health IT Systems

    PubMed Central

    Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui

    2015-01-01

    Rigorous methods for design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166

  20. Progress in digital color workflow understanding in the International Color Consortium (ICC) Workflow WG

    NASA Astrophysics Data System (ADS)

    McCarthy, Ann

    2006-01-01

    The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.

  1. Automated Transition State Search and Its Application to Diverse Types of Organic Reactions.

    PubMed

    Jacobson, Leif D; Bochevarov, Art D; Watson, Mark A; Hughes, Thomas F; Rinaldo, David; Ehrlich, Stephan; Steinbrecher, Thomas B; Vaitheeswaran, S; Philipp, Dean M; Halls, Mathew D; Friesner, Richard A

    2017-11-14

    Transition state search is at the center of multiple types of computational chemical predictions related to mechanistic investigations, reactivity and regioselectivity predictions, and catalyst design. The process of finding transition states in practice is, however, a laborious multistep operation that requires significant user involvement. Here, we report a highly automated workflow designed to locate transition states for a given elementary reaction with minimal setup overhead. The only essential inputs required from the user are the structures of the separated reactants and products. The seamless workflow combining computational technologies from the fields of cheminformatics, molecular mechanics, and quantum chemistry automatically finds the most probable correspondence between the atoms in the reactants and the products, generates a transition state guess, launches a transition state search through a combined approach involving the relaxing string method and the quadratic synchronous transit, and finally validates the transition state via the analysis of the reactive chemical bonds and imaginary vibrational frequencies as well as by the intrinsic reaction coordinate method. Our approach does not target any specific reaction type, nor does it depend on training data; instead, it is meant to be of general applicability for a wide variety of reaction types. The workflow is highly flexible, permitting modifications such as a choice of accuracy, level of theory, basis set, or solvation treatment. Successfully located transition states can be used for setting up transition state guesses in related reactions, saving computational time and increasing the probability of success. The utility and performance of the method are demonstrated in applications to transition state searches in reactions typical for organic chemistry, medicinal chemistry, and homogeneous catalysis research. In particular, applications of our code to Michael additions, hydrogen abstractions, Diels-Alder cycloadditions, carbene insertions, and an enzyme reaction model involving a molybdenum complex are shown and discussed.
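
    The final validation step lends itself to a small illustration; the Python sketch below (hypothetical function and parameter names, toy numbers) encodes the usual acceptance test that a located stationary point has exactly one imaginary vibrational frequency and that this mode is localized on the reacting atoms. It is not the code reported in the paper.

      # Hedged sketch of a transition-state acceptance test. Frequencies are in
      # cm^-1; imaginary modes are conventionally reported as negative numbers.
      from typing import Dict, List

      def is_transition_state(frequencies_cm1: List[float],
                              mode_weights: Dict[int, float],
                              reacting_atoms: List[int],
                              min_reactive_share: float = 0.5) -> bool:
          imaginary = [f for f in frequencies_cm1 if f < 0.0]
          if len(imaginary) != 1:
              return False                        # minima have none, higher-order saddles have several
          reactive = sum(mode_weights.get(i, 0.0) for i in reacting_atoms)
          return reactive >= min_reactive_share   # imaginary mode should sit on the reacting bonds

      # Toy numbers: one imaginary mode at -512 cm^-1, 70% of its displacement on atoms 3 and 7.
      print(is_transition_state([-512.0, 45.0, 88.0], {3: 0.4, 7: 0.3, 1: 0.3}, [3, 7]))  # True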

  2. Epiviz: a view inside the design of an integrated visual analysis software for genomics

    PubMed Central

    2015-01-01

    Background Computational and visual data analysis for genomics has traditionally involved a combination of tools and resources, of which the most ubiquitous consist of genome browsers, focused mainly on integrative visualization of large numbers of big datasets, and computational environments, focused on data modeling of a small number of moderately sized datasets. Workflows that involve the integration and exploration of multiple heterogeneous data sources, small and large, public and user specific have been poorly addressed by these tools. In our previous work, we introduced Epiviz, which bridges the gap between the two types of tools, simplifying these workflows. Results In this paper we expand on the design decisions behind Epiviz, and introduce a series of new advanced features that further support the type of interactive exploratory workflow we have targeted. We discuss three ways in which Epiviz advances the field of genomic data analysis: 1) it brings code to interactive visualizations at various different levels; 2) takes the first steps in the direction of collaborative data analysis by incorporating user plugins from source control providers, as well as by allowing analysis states to be shared among the scientific community; 3) combines established analysis features that have never before been available simultaneously in a genome browser. In our discussion section, we present security implications of the current design, as well as a series of limitations and future research steps. Conclusions Since many of the design choices of Epiviz are novel in genomics data analysis, this paper serves both as a document of our own approaches with lessons learned, as well as a start point for future efforts in the same direction for the genomics community. PMID:26328750

  3. [Isolation and determination of the seeds of Pachyrrhizus errosus protein by high performance gel filtration chromatography (GFC)].

    PubMed

    Wu, H; Hao, B; Tang, G; Lin, Y

    1997-03-01

    From the seeds of Pachyrrhizus errosus, three protein constituents, namely PE1, PE2 and PE3, have been isolated and purified by extraction with 5mmol/L phosphate saline (0.9% NaCl) buffer (PB) at pH 7.2, and S-Sepharose Fast Flow Column (2.6cm x 15cm) chromatography eluted with 5mmol/L phosphate buffer (pH 7.0) containing 1mmol/L NaCl. The three proteins were further separated on two connected Protein-Pak 60+Protein-Pak 125 [7.5mm x 39cm, 10microm] columns with a mobile phase of 0.2mol/L phosphate buffer (pH 6.5). The flow rate was kept constant at 0.8mL/min by a YSB-2 type high-pressure pump. The effluent was monitored at a wavelength of 280nm on a photodiode array detector. These three proteins proved to be homogeneous by SDS-PAGE, IEF and HPGFC experiments, and all present typical absorption spectra in the ultraviolet region. The molecular weights of the three proteins are approximately 33000D, 14500D and 14000D, respectively, by SDS-PAGE. By HPGFC analysis, however, the MW value of PE2 is 28000D. This indicates that PE2 may be composed of two chains joined by a disulfide bond, which is further supported by the subsequent amino acid composition analysis. The isoelectric points of the three proteins are 4.5, 6.5 and 7.5, respectively, by IEF. The amino acid compositions of the three proteins were determined with OPA post-column derivatization/fluorescence detection.

  4. Characterisation of phenol oxidase and peroxidase from maize silk.

    PubMed

    Sukalović, V Hadzi-Tasković; Veljović-Jovanović, S; Maksimović, J Dragisić; Maksimović, V; Pajić, Z

    2010-05-01

    Silk of some maize genotypes contains a high level of phenolics that undergo enzymatic oxidation to form quinones, which condense among themselves or with proteins to form brown pigments. Two phenolic oxidizing enzymes, peroxidase (POD; EC 1.11.1.7) and polyphenol oxidase (PPO; EC 1.10.3.1), from maize (Zea mays L.) silk were characterised with respect to their preferred substrate, different isoforms and specific effectors. One browning silk sample with high, and two non-browning samples with low phenolic content were investigated. Although POD oxidizes a wide range of phenolic substrates in vitro, its activity rate was independent of silk phenolic content. PPO activity, detected with o-diphenolic substrates, was abundant only in browning silk, and low or absent in non-browning silk. Pollination increased POD but not PPO activity. Isoelectric-focusing (IEF) and specific staining for POD and PPO showed a high degree of polymorphism that varied with silk origin. The IEF pattern of POD revealed a number of anionic and several cationic isoenzymes, with the most pronounced having neutral pI 7 and a basic isoform with pI 10. Detected isoforms of PPO were anionic, except for one neutral form found only in browning silk, and occupied positions different from those of POD. Different inhibitory effects of NaN(3), EDTA, KCN, and L-cysteine, as well as different impacts of a variety of cations on the oxidation of chlorogenic acid, mediated by PPO or POD, were detected. The findings are discussed in terms of a possible roles of these enzymes in defence and pollination.

  5. Sexual Functioning and Behavior of Men with Body Dysmorphic Disorder Concerning Penis Size Compared with Men Anxious about Penis Size and with Controls: A Cohort Study

    PubMed Central

    Veale, David; Miles, Sarah; Read, Julie; Troglia, Andrea; Wylie, Kevan; Muir, Gordon

    2015-01-01

    Introduction Little is known about the sexual functioning and behavior of men anxious about the size of their penis and the means that they might use to try to alter the size of their penis. Aim To compare sexual functioning and behavior in men with body dysmorphic disorder (BDD) concerning penis size and in men with small penis anxiety (SPA without BDD) and in a control group of men who do not have any concerns. Methods An opportunistic sample of 90 men from the community were recruited and divided into three groups: BDD (n = 26); SPA (n = 31) and controls (n = 33). Main Outcome Measures The Index of Erectile Function (IEF), sexual identity and history; and interventions to alter the size of their penis. Results Men with BDD compared with controls had reduced erectile function, orgasmic function, intercourse satisfaction and overall satisfaction on the IEF. Men with SPA compared with controls had reduced intercourse satisfaction. There were no differences in sexual desire, the frequency of intercourse or masturbation across any of the three groups. Men with BDD and SPA were more likely than the controls to attempt to alter the shape or size of their penis (for example jelqing, vacuum pumps or stretching devices) with poor reported success. Conclusion Men with BDD are more likely to have erectile dysfunction and less satisfaction with intercourse than controls but maintain their libido. Further research is required to develop and evaluate a psychological intervention for such men with adequate outcome measures. PMID:26468378

  6. The use of isoelectric focusing to identify rhinoceros keratins.

    PubMed

    Butler, D J; De Forest, P R; Kobilinsky, L

    1990-03-01

    Keratins represent the principal structural proteins of hair. They are also found in horn, nail, claw, hoof, and feather. Hair and nail samples from human and canine sources and hair samples from mule deer, white tail deer, cat, moose, elk, antelope, caribou, raccoon, and goat were studied. Parrot and goose feathers were also analyzed. Keratins are polymorphic, and species differences are known to exist. Proteinaceous extracts of deer and antelope antlers and bovine and rhinoceros horn were prepared by solubilizing 10 mg of horn sample in 200 microL of a solution containing 12M urea, 74mM Trizma base, and 78mM dithiothreitol (DTT). Extraction took place over a 48-h period. A 25-microL aliquot of extract was removed and incubated with 5 microL of 0.1 M DTT for 10 min at 25 degrees C. Keratins were then separated by isoelectric focusing (IEF) on 5.2% polyacrylamide gels for 3 h and visualized using silver staining. At least 20 bands could be observed for each species studied. However, band patterns differed in the position of each band, in the number of bands, and in band coloration resulting from the silver staining process. Horn from two species of rhinoceros was examined. For both specimens, most bands occurred in the pH range of 4 to 5. Although similar patterns for both species were observed, they differed sufficiently to differentiate one from the other. As might be expected, the closer two species are related phylogenetically, the greater the similarity in the IEF pattern produced from their solubilized keratin.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Studies on human fetal lens crystallins under oxidative stress and protective effects of tea polyphenols.

    PubMed

    Xiang, H; Pan, S; Li, S

    1998-09-01

    To investigate the oxidative modification of water-soluble crystallins of human fetal lens with H2O2 and fourteen metal ions with or without EDTA. Tea polyphenols (TP) were added to the above solutions in order to test their antioxidative abilities. The experiments were performed at 37 degrees C with final concentrations of 2.5 mg/ml protein, 0.1 mM metal ions, 0.3 mM EDTA and 1.0 mM H2O2. TP was then added to the solution containing CuSO4 and H2O2, and after 5 or 24 hours the crystallins were analysed with SDS-PAGE and IEF. There were marked oxidative modifications of lens protein in H2O2 and copper without EDTA. In the SDS-PAGE patterns, we found an increase in species in bands higher than 30 kD and some diffuse bands from 30 to 17 kD after 5 hours. In the IEF patterns, there was a general increase in acidity with loss of the more basic species. When TP was added, there was no difference from the control group. The results indicate that exposure of water-soluble protein to H2O2 and copper leads to covalent crosslinking and cleavage of polypeptides. After 24 hours, the oxidative modifications of the crystallins continued to develop; in terms of catalytic strength, copper ions were stronger than iron ions. On the other hand, this work showed that the anti-oxidative action of TP is strong.

  8. House-dust mite allergy: mapping of Dermatophagoides pteronyssinus allergens for dogs by two-dimensional immunoblotting.

    PubMed

    Martins, Luís Miguel Lourenço; Marques, Andreia Grilo; Pereira, Luísa Maria Dotti Silva; Goicoa, Ana; Semião-Santos, Saul José; Bento, Ofélia Pereira

    2015-04-01

    Specific immunotherapy has been shown to be very useful for allergy control in dogs, with a common success rate ranging from 65% to 70%. However, this efficacy could probably be improved, with the identification of individual allergomes and the choice of more adequate molecular allergen pools for specific immunotherapy being the strategy. The aim was to map Dermatophagoides pteronyssinus (Der p) allergens for mite-sensitized atopic dogs, for a better understanding of how individual allergograms may influence the response to house-dust mite immunotherapy. To identify the Der p mite allergome for dogs, 20 individuals allergic to dust mites and sensitized to Der p were selected. The extract from Der p was submitted to isoelectric focusing (IEF), one-dimensional (1-D) and two-dimensional (2-D) sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Separated proteins were blotted onto polyvinylidene difluoride (PVDF) membranes and immunoblottings were performed with patient sera. Allergen-bound specific IgE was detected. Eleven allergens were identified from isoelectric focusing (IEF), as well as from 1-D SDS-PAGE. From 2-D SDS-PAGE, 24 spots were identified. Several similarities were found between dog and human allergograms and no absolute correlation between sensitization and allergy was observed either. As in humans, different individual allergograms do not seem to implicate different clinical patterns, but may influence the response to specific immunotherapy. A molecular epidemiology approach in veterinary allergy management, by characterizing individual patients' allergomes and choosing the best molecular allergen pool for each patient, could also improve the efficacy of allergy immunotherapy.

  9. Development of a reinforced electrochemically aligned collagen bioscaffold for tendon tissue engineering applications

    NASA Astrophysics Data System (ADS)

    Uquillas Paredes, Jorge Alfredo

    Type-I collagen is a promising biomaterial that can be used to synthesize bioscaffolds as a strategy to regenerate and repair damaged tendons. The existing in vitro prepared collagen bioscaffolds are in the form of gels, foams, or extruded fibers. These bioscaffolds readily present sites for attachment of biological factors and cells; however, they have extremely poor biomechanical properties in comparison to the properties of native tendons. The biomechanical function of type-I collagen bioscaffolds needs to be elevated to the level of natural tissues for this biomaterial to replace mechanically challenged tendons in a functionally meaningful way. The overall goal of this dissertation is to develop a reinforced electrochemically aligned collagenous bioscaffold for applications in tendon tissue engineering. The bioscaffold is synthesized by a unique electrochemical process via isoelectric focusing (IEF) to attain a very high degree of molecular alignment and packing density. This dissertation presents progress made on four aims: A) development of simple and descriptive electrochemical theory via the mathematical model of IEF and the forces acting on collagen alignment under an electric field; B) optimization of the post-alignment PBS treatment step to achieve the d-banding pattern in uncrosslinked electrochemically aligned collagen (ELAC) bioscaffolds; C) optimization of the best crosslinking protocol to produce the strongest possible ELAC biomaterial with excellent cellular compatibility; and D) in vivo evaluation of the biocompatibility and biodegradability properties of electrochemically aligned collagen bioscaffolds. The results of this dissertation provide strong evidence showing that reinforced ELAC bioscaffolds could be used clinically in the future to repair damaged tendons.

  10. Sexual Functioning and Behavior of Men with Body Dysmorphic Disorder Concerning Penis Size Compared with Men Anxious about Penis Size and with Controls: A Cohort Study.

    PubMed

    Veale, David; Miles, Sarah; Read, Julie; Troglia, Andrea; Wylie, Kevan; Muir, Gordon

    2015-09-01

    Little is known about the sexual functioning and behavior of men anxious about the size of their penis, or about the means that they might use to try to alter its size. The aim was to compare sexual functioning and behavior in men with body dysmorphic disorder (BDD) concerning penis size, men with small penis anxiety (SPA without BDD), and a control group of men without any such concerns. An opportunistic sample of 90 men from the community was recruited and divided into three groups: BDD (n = 26), SPA (n = 31) and controls (n = 33). Outcome measures were the Index of Erectile Function (IEF), sexual identity and history, and interventions used to alter the size of their penis. Men with BDD, compared with controls, had reduced erectile function, orgasmic function, intercourse satisfaction and overall satisfaction on the IEF. Men with SPA, compared with controls, had reduced intercourse satisfaction. There were no differences in sexual desire or in the frequency of intercourse or masturbation across the three groups. Men with BDD and SPA were more likely than controls to attempt to alter the shape or size of their penis (for example by jelqing, vacuum pumps or stretching devices), with poor reported success. Men with BDD are more likely to have erectile dysfunction and less satisfaction with intercourse than controls but maintain their libido. Further research is required to develop and evaluate a psychological intervention for such men with adequate outcome measures.

  11. Radiology information system: a workflow-based approach.

    PubMed

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P

    2009-09-01

    Introducing workflow management technology in healthcare appears promising for addressing the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system in a radiology department as a use case. First, a workflow model of the typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes, and that it enhanced process management in the department. It can also provide a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, more process-management functionality and more workflow-aware integration. The work of this paper is an initial endeavor toward introducing workflow management technology in healthcare.

  12. Information Issues and Contexts that Impair Team Based Communication Workflow: A Palliative Sedation Case Study.

    PubMed

    Cornett, Alex; Kuziemsky, Craig

    2015-01-01

    Implementing team-based workflows can be complex because of the scope of providers involved and the extent of information exchange and communication that needs to occur. While a workflow may represent the ideal structure of communication that needs to occur, information issues and contextual factors may impact how the workflow is implemented in practice. Understanding these issues will help us better design systems to support team-based workflows. In this paper we use a case study of palliative sedation therapy (PST) to model a PST workflow and then use it to identify the purposes of communication, information issues and contextual factors that impact them. We then suggest how our findings could inform health information technology (HIT) design to support team-based communication workflows.

  13. A Workflow to Improve the Alignment of Prostate Imaging with Whole-mount Histopathology.

    PubMed

    Yamamoto, Hidekazu; Nir, Dror; Vyas, Lona; Chang, Richard T; Popert, Rick; Cahill, Declan; Challacombe, Ben; Dasgupta, Prokar; Chandra, Ashish

    2014-08-01

    Evaluation of prostate imaging tests against whole-mount histology specimens requires accurate alignment between radiologic and histologic data sets. Misalignment results in false-positive and -negative zones as assessed by imaging. We describe a workflow for three-dimensional alignment of prostate imaging data against whole-mount prostatectomy reference specimens and assess its performance against a standard workflow. Ethical approval was granted. Patients underwent motorized transrectal ultrasound (Prostate Histoscanning) to generate a three-dimensional image of the prostate before radical prostatectomy. The test workflow incorporated steps for axial alignment between imaging and histology, size adjustments following formalin fixation, and use of custom-made parallel cutters and digital caliper instruments. The control workflow comprised freehand cutting and assumed homogeneous block thicknesses at the same relative angles between pathology and imaging sections. Thirty radical prostatectomy specimens were histologically and radiologically processed, either by an alignment-optimized workflow (n = 20) or a control workflow (n = 10). The optimized workflow generated tissue blocks of heterogeneous thicknesses but with no significant drifting in the cutting plane. The control workflow resulted in significantly nonparallel blocks, accurately matching only one out of four histology blocks to their respective imaging data. The image-to-histology alignment accuracy was 20% greater in the optimized workflow (P < .0001), with higher sensitivity (85% vs. 69%) and specificity (94% vs. 73%) for margin prediction in a 5 × 5-mm grid analysis. A significantly better alignment was observed in the optimized workflow. Evaluation of prostate imaging biomarkers using whole-mount histology references should include a test-to-reference spatial alignment workflow. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  14. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy. PMID:17263870

  15. FAST: A fully asynchronous and status-tracking pattern for geoprocessing services orchestration

    NASA Astrophysics Data System (ADS)

    Wu, Huayi; You, Lan; Gui, Zhipeng; Gao, Shuang; Li, Zhenqiang; Yu, Jingmin

    2014-09-01

    Geoprocessing service orchestration (GSO) provides a unified and flexible way to implement cross-application, long-lived, and multi-step geoprocessing service workflows by coordinating geoprocessing services collaboratively. Usually, geoprocessing services and geoprocessing service workflows are data and/or computing intensive. This intensity may make the execution of a workflow time-consuming. Since it initiates an execution request without blocking other interactions on the client side, an asynchronous mechanism is especially appropriate for GSO workflows. Many critical problems remain to be solved in existing asynchronous patterns for GSO, including difficulties in improving performance, tracking status, and clarifying the workflow structure. These problems are a challenge when orchestrating for performance efficiency, making statuses instantly available, and constructing clearly structured GSO workflows. A Fully Asynchronous and Status-Tracking (FAST) pattern that adopts asynchronous interactions throughout the whole communication tier of a workflow is proposed for GSO. The proposed FAST pattern includes a mechanism that actively pushes the latest status to clients instantly and economically. An independent proxy was designed to isolate the status-tracking logic from the geoprocessing business logic, which helps form a clear GSO workflow structure. A workflow was implemented in the FAST pattern to simulate the flooding process in the Poyang Lake region. Experimental results show that the proposed FAST pattern can efficiently tackle data- and computing-intensive geoprocessing tasks. The performance of all collaborative partners was improved by the asynchronous mechanism throughout the communication tier. The status-tracking mechanism helps users retrieve the latest running status of a GSO workflow efficiently and instantly. The clear structure of the GSO workflow lowers the barriers for geospatial domain experts and model designers to compose asynchronous GSO workflows. Most importantly, it provides better support for locating and diagnosing potential exceptions.

  16. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi Jc; Gryk, Michael R

    2007-01-30

    Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy.

  17. A recommended workflow methodology in the creation of an educational and training application incorporating a digital reconstruction of the cerebral ventricular system and cerebrospinal fluid circulation to aid anatomical understanding.

    PubMed

    Manson, Amy; Poyade, Matthieu; Rea, Paul

    2015-10-19

    The use of computer-aided learning in education can be advantageous, especially when interactive three-dimensional (3D) models are used to aid learning of complex 3D structures. The anatomy of the ventricular system of the brain is difficult to fully understand as it is seldom seen in 3D, as is the flow of cerebrospinal fluid (CSF). This article outlines a workflow for the creation of an interactive training tool for the cerebral ventricular system, an educationally challenging area of anatomy. This outline is based on the use of widely available computer software packages. Using MR images of the cerebral ventricular system and several widely available commercial and free software packages, the techniques of 3D modelling, texturing, sculpting, image editing and animation were combined into a workflow for the creation of an interactive educational and training tool. This was focussed on cerebral ventricular system anatomy and the flow of cerebrospinal fluid. We have successfully created a robust methodology by using key software packages in the creation of an interactive education and training tool. This has resulted in an application being developed which details the anatomy of the ventricular system, and the flow of cerebrospinal fluid, using an anatomically accurate 3D model. In addition to this, our established workflow pattern presented here also shows how tutorials, animations and self-assessment tools can be embedded into the training application. Through our creation of an established workflow for generating educational and training material that demonstrates cerebral ventricular anatomy and the flow of cerebrospinal fluid, this approach has enormous potential to be adopted into student training in this field. With the digital age advancing rapidly, it has the potential to be used as an innovative tool alongside other methodologies for the training of future healthcare practitioners and scientists. This workflow could be used in the creation of other tools, which could be developed for use not only on desktop and laptop computers but also smartphones, tablets and fully immersive stereoscopic environments. It also could form the basis on which to build surgical simulations enhanced with haptic interaction.

  18. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS

    PubMed Central

    Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2016-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971

  19. RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.

    PubMed

    Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz

    2017-01-01

    As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.

  20. The future of scientific workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Peterka, Tom; Altintas, Ilkay

    Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.

  1. Development of a user customizable imaging informatics-based intelligent workflow engine system to enhance rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Martinez, Clarisa; Wang, Jing; Liu, Ye; Liu, Brent

    2014-03-01

    Clinical trials usually have a demand to collect, track and analyze multimedia data according to the workflow. Currently, the clinical trial data management requirements are normally addressed with custom-built systems. Challenges occur in the workflow design within different trials. The traditional pre-defined custom-built system is usually limited to a specific clinical trial and normally requires time-consuming and resource-intensive software development. To provide a solution, we present a user customizable imaging informatics-based intelligent workflow engine system for managing stroke rehabilitation clinical trials with intelligent workflow. The intelligent workflow engine provides flexibility in building and tailoring the workflow in various stages of clinical trials. By providing a solution to tailor and automate the workflow, the system will save time and reduce errors for clinical trials. Although our system is designed for clinical trials for rehabilitation, it may be extended to other imaging based clinical trials as well.

  2. The standard-based open workflow system in GeoBrain (Invited)

    NASA Astrophysics Data System (ADS)

    Di, L.; Yu, G.; Zhao, P.; Deng, M.

    2013-12-01

    GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflow for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119 (transparent, translucent, and opaque) are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after a proper peer review of the models is conducted. Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.

  3. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  4. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  5. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    DOE PAGES

    Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...

    2015-01-01

    This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general-purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
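
    As a rough, worked illustration of the kind of trade-off such a model captures (and not the paper's actual AMPL/CMPL formulation), the sketch below picks the cheapest single instance type that can finish a set of identical tasks before a deadline under hourly billing; all names and numbers are made up.

    import math

    # Minimal sketch, assuming identical tasks, a single instance type per run and
    # hourly billing; a simplification, not the paper's optimization model.
    def cheapest_feasible(instance_types, n_tasks, deadline_h):
        """instance_types: dict name -> (price_per_hour, task_runtime_h)."""
        best = None
        for name, (price, runtime_h) in instance_types.items():
            if runtime_h > deadline_h:
                continue  # even a single task would miss the deadline
            tasks_per_instance = math.floor(deadline_h / runtime_h)
            n_instances = math.ceil(n_tasks / tasks_per_instance)
            billed_hours = math.ceil(tasks_per_instance * runtime_h)  # hourly billing
            cost = n_instances * billed_hours * price
            if best is None or cost < best[1]:
                best = (name, cost, n_instances)
        return best

    # Illustrative instance catalogue: price per hour and per-task runtime in hours.
    types = {"small": (0.10, 2.0), "large": (0.45, 0.5)}
    print(cheapest_feasible(types, n_tasks=100, deadline_h=4.0))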

  6. Using EHR audit trail logs to analyze clinical workflow: A case study from community-based ambulatory clinics.

    PubMed

    Wu, Danny T Y; Smart, Nikolas; Ciemins, Elizabeth L; Lanham, Holly J; Lindberg, Curt; Zheng, Kai

    2017-01-01

    Understanding clinical workflow is a critical first step in developing a workflow-supported clinical documentation system. While time and motion studies have been regarded as the gold standard of workflow analysis, this method can be resource-consuming and its data may be biased due to the cognitive limitations of human observers. In this study, we aimed to evaluate the feasibility and validity of using EHR audit trail logs to analyze clinical workflow. Specifically, we compared three known workflow changes from our previous study with the corresponding EHR audit trail logs of the study participants. The results showed that EHR audit trail logs can be a valid source for clinical workflow analysis, and can provide an objective view of clinicians' behaviors, multi-dimensional comparisons, and a highly extensible analysis framework.
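
    As a rough illustration of how audit trail events can be turned into workflow measures, the sketch below derives per-user gaps between consecutive time-stamped actions with pandas; the column names and events are hypothetical, not the schema used in the study.

    import pandas as pd

    # Minimal sketch, assuming a hypothetical audit-log export with columns
    # user, patient, action and timestamp; not the study's actual schema.
    log = pd.DataFrame({
        "user":      ["rn01", "rn01", "md02", "md02"],
        "patient":   ["p1", "p1", "p1", "p1"],
        "action":    ["open_chart", "sign_note", "open_chart", "place_order"],
        "timestamp": pd.to_datetime(["2017-01-01 08:00", "2017-01-01 08:12",
                                     "2017-01-01 09:00", "2017-01-01 09:03"]),
    })

    log = log.sort_values(["user", "timestamp"])
    # The gap between consecutive actions by the same user approximates task duration.
    log["duration_min"] = log.groupby("user")["timestamp"].diff().dt.total_seconds() / 60
    print(log.groupby(["user", "action"])["duration_min"].mean())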

  7. Observing health professionals' workflow patterns for diabetes care - First steps towards an ontology for EHR services.

    PubMed

    Schweitzer, M; Lasierra, N; Hoerbst, A

    2015-01-01

    Increasing flexibility from a user perspective and enabling workflow-based interaction facilitate easy, user-friendly utilization of EHRs in healthcare professionals' daily work. To offer such versatile EHR functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web services. The backbone of such an architecture is an ontology that represents clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results of observations of routine diabetes consultations, conducted in order to identify these workflows and the relations among the included tasks. The observed workflows were first modeled in BPMN and then generalized. As a next step in our study, interviews will be conducted with clinical personnel to validate the modeled workflows.

  8. Non-targeted workflow for identification of antimicrobial compounds in animal feed using bioassay-directed screening in combination with liquid chromatography-high resolution mass spectrometry.

    PubMed

    Wegh, Robin S; Berendsen, Bjorn J A; Driessen-Van Lankveld, Wilma D M; Pikkemaat, Mariël G; Zuidema, Tina; Van Ginkel, Leen A

    2017-11-01

    A non-targeted workflow is reported for the isolation and identification of antimicrobial active compounds using bioassay-directed screening and LC coupled to high-resolution MS. Suspect samples are extracted using a generic protocol and fractionated using two different LC conditions (A and B). The behaviour of the bioactive compound under these different conditions yields information about the physicochemical properties of the compound and introduces variations in co-eluting compounds in the fractions, which is essential for peak picking and identification. The fractions containing the active compound(s) obtained with conditions A and B are selected using a microbiological effect-based bioassay. The selected bioactive fractions from A and B are analysed using LC combined with high-resolution MS. Selection of relevant signals is automatically carried out by selecting all signals present in both bioactive fractions A and B, yielding tremendous data reduction. The method was assessed using two spiked feed samples and subsequently applied to two feed samples containing an unidentified compound showing microbial growth inhibition. In all cases, the identity of the compound causing microbiological inhibition was successfully confirmed.
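
    The central data-reduction step, keeping only signals detected in the bioactive fractions from both LC conditions, can be sketched as an m/z intersection within a mass tolerance; the feature lists and tolerance below are purely illustrative.

    # Minimal sketch of intersection-based peak picking: keep only the m/z features
    # observed in the bioactive fraction under both LC conditions A and B, within a
    # mass tolerance. Feature lists and tolerance are purely illustrative.

    def intersect_features(features_a, features_b, tol_ppm=5.0):
        shared = []
        for mz_a in features_a:
            tol = mz_a * tol_ppm / 1e6
            if any(abs(mz_a - mz_b) <= tol for mz_b in features_b):
                shared.append(mz_a)
        return shared

    fraction_a = [301.1410, 445.2207, 512.3034]  # bioactive fraction, condition A
    fraction_b = [301.1412, 388.2519, 512.3031]  # bioactive fraction, condition B
    print(intersect_features(fraction_a, fraction_b))  # -> [301.141, 512.3034]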

  9. Design and implementation of an automated compound management system in support of lead optimization.

    PubMed

    Quintero, Catherine; Kariv, Ilona

    2009-06-01

    To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.

  10. Challenges in horizontal model integration.

    PubMed

    Kolczyk, Katrin; Conradi, Carsten

    2016-03-11

    Systems Biology has motivated dynamic models of important intracellular processes at the pathway level, for example, in signal transduction and cell cycle control. To answer important biomedical questions, however, one has to go beyond the study of isolated pathways towards the joint study of interacting signaling pathways or the joint study of signal transduction and cell cycle control. Thereby the reuse of established models is preferable, as it will generally reduce the modeling effort and increase the acceptance of the combined model in the field. Obtaining a combined model can be challenging, especially if the submodels are large and/or come from different working groups (as is generally the case, when models stored in established repositories are used). To support this task, we describe a semi-automatic workflow based on established software tools. In particular, two frequent challenges are described: identification of the overlap and subsequent (re)parameterization of the integrated model. The reparameterization step is crucial, if the goal is to obtain a model that can reproduce the data explained by the individual models. For demonstration purposes we apply our workflow to integrate two signaling pathways (EGF and NGF) from the BioModels Database.
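
    A first pass at identifying the overlap between two submodels can be as simple as intersecting their species identifiers and flagging conflicting initial values. The sketch below assumes the models have already been reduced to plain dictionaries with hypothetical names; real SBML models from a repository would first need consistent identifier mapping.

    # Minimal sketch: find the overlap between two pathway models reduced to
    # {species: initial_value} dictionaries. Species names and values are
    # hypothetical, not taken from the EGF/NGF models in the BioModels Database.

    egf_model = {"Raf": 0.1, "MEK": 0.5, "ERK": 1.0, "EGFR": 0.2}
    ngf_model = {"Raf": 0.3, "MEK": 0.4, "ERK": 1.2, "TrkA": 0.1}

    overlap = sorted(set(egf_model) & set(ngf_model))
    print("shared species:", overlap)  # candidates for merging
    for species in overlap:
        # Conflicting initial values flag where reparameterization is needed.
        if egf_model[species] != ngf_model[species]:
            print(f"{species}: initial values differ "
                  f"({egf_model[species]} vs {ngf_model[species]})")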

  11. P7-S Combining Workflow-Based Project Organization with Protein-Dependant Data Retrieval for the Retrieval of Extensive Proteome Information

    PubMed Central

    Glandorf, J.; Thiele, H.; Macht, M.; Vorm, O.; Podtelejnikov, A.

    2007-01-01

    In the course of a full-scale proteomics experiment, the handling of the data as well as the retrieval of the relevant information from the results is a major challenge due to the massive amount of generated data (gel images, chromatograms, and spectra) as well as associated result information (sequences, literature, etc.). To obtain meaningful information from these data, one has to filter the results in an easy way. Possibilities to do so can be based on GO terms or structural features such as transmembrane domains, involvement in certain pathways, etc. In this presentation we will show how a combination of a software package with a workflow-based result organization (Bruker ProteinScape) and a protein-centered data-mining software (Proxeon ProteinCenter) can assist in the comparison of the results from large projects, such as comparison of cross-platform results from 2D PAGE/MS with shotgun LC-ESI-MS/MS. We will present differences between different technologies and show how these differences can be easily identified and how they allow us to draw conclusions on the involved technologies.

  12. Security aspects in teleradiology workflow

    NASA Astrophysics Data System (ADS)

    Soegner, Peter I.; Helweg, Gernot; Holzer, Heimo; zur Nedden, Dieter

    2000-05-01

    The medicolegal requirements of privacy, security and confidentiality motivated the attempt to develop a secure teleradiology workflow between the telepartners, the radiologist and the referring physician. To address the lack of data protection and data security we introduced biometric fingerprint scanners in combination with smart cards to identify the teleradiology partners, and communicated over an encrypted TCP/IP satellite link between Innsbruck and Reutte. We used an asymmetric cryptography method to guarantee authentication, integrity of the data packages and confidentiality of the medical data. A biometric feature was necessary to avoid mistaken identity of persons who wanted access to the system. Only an invariable electronic identification allows legal liability for the final report, and only a secure data connection allows the exchange of sensitive medical data between different partners of healthcare networks. In our study we selected the user-friendly combination of a smart card and a biometric fingerprint technique, called SkymedTM Double Guard Secure Keyboard (Agfa-Gevaert), to confirm identities and log into the imaging workstations and the electronic patient record. We examined the interoperability of the software used with the existing platforms. Only the WIN-XX operating systems could be protected at the time of our study.
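
    The combination of authentication and data integrity through asymmetric cryptography described above can be illustrated with a short sign-and-verify sketch using the Python cryptography package; this is a generic illustration, not the Skymed product's implementation.

    # Minimal sketch of asymmetric signing for authentication and integrity of a
    # report, using the Python "cryptography" package; a generic illustration,
    # not the Skymed Double Guard implementation.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    report = b"Final radiology report, study 123: no acute findings."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(report, pss, hashes.SHA256())

    # The receiving physician checks origin and integrity with the public key;
    # verify() raises InvalidSignature if the report was altered in transit.
    public_key.verify(signature, report, pss, hashes.SHA256())
    print("signature valid")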

  13. Toward Streamlined Identification of Dioxin-like Compounds in Environmental Samples through Integration of Suspension Bioassay.

    PubMed

    Xiao, Hongxia; Brinkmann, Markus; Thalmann, Beat; Schiwy, Andreas; Große Brinkhaus, Sigrid; Achten, Christine; Eichbaum, Kathrin; Gembé, Carolin; Seiler, Thomas-Benjamin; Hollert, Henner

    2017-03-21

    Effect-directed analysis (EDA) is a powerful strategy to identify biologically active compounds in environmental samples. However, in current EDA studies, fractionation and handling procedures are laborious, consist of multiple evaporation steps, and thus bear the risk of contamination and decreased recoveries of the target compounds. The resulting low throughput has been one of the major bottlenecks of EDA. Here, we propose a high-throughput EDA (HT-EDA) workflow combining reversed-phase high-performance liquid chromatography fractionation of samples into 96-well microplates, followed by toxicity assessment in the micro-EROD bioassay with wild-type rat hepatoma H4IIE cells, and chemical analysis of bioactive fractions. The approach was evaluated using single substances, binary mixtures, and extracts of sediment samples collected at the Three Gorges Reservoir, Yangtze River, China, as well as the rivers Rhine and Elbe, Germany. Selected bioactive fractions were analyzed by highly sensitive gas chromatography-atmospheric pressure laser ionization-time-of-flight-mass spectrometry. In addition, we optimized the workflow by seeding previously adapted suspension-cultured H4IIE cells directly into the microplate used for fractionation, which makes any transfers of fractionated samples unnecessary. The proposed HT-EDA workflow simplifies the procedure for wider application in ecotoxicology and environmental routine programs.

  14. Design and implementation of workflow engine for service-oriented architecture

    NASA Astrophysics Data System (ADS)

    Peng, Shuqing; Duan, Huining; Chen, Deyun

    2009-04-01

    As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, little reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer architecture for a workflow engine based on SOA is put forward following the XPDL standard of the Workflow Management Coalition, the route control mechanism in the control model is accomplished, the scheduling strategy for cyclic and acyclic routing is designed, and the workflow engine is implemented using technologies such as XML, JSP and EJB.
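
    For the acyclic case, routing amounts to dispatching each activity once all of its predecessors have completed; the sketch below shows a minimal topological-order scheduler with a hypothetical task graph, far simpler than the XPDL-based engine described in the paper.

    from collections import deque

    # Minimal sketch of acyclic route scheduling: dispatch each activity once all
    # of its predecessors have finished (Kahn's topological sort). The task graph
    # is hypothetical and not taken from the paper.

    def schedule(tasks, edges):
        succ = {t: [] for t in tasks}
        indeg = {t: 0 for t in tasks}
        for before, after in edges:
            succ[before].append(after)
            indeg[after] += 1
        ready = deque(t for t in tasks if indeg[t] == 0)
        order = []
        while ready:
            t = ready.popleft()
            order.append(t)  # here the engine would dispatch the activity
            for nxt in succ[t]:
                indeg[nxt] -= 1
                if indeg[nxt] == 0:
                    ready.append(nxt)
        if len(order) != len(indeg):
            raise ValueError("cycle detected: cyclic routing needs explicit loop control")
        return order

    print(schedule(["review", "approve", "archive", "notify"],
                   [("review", "approve"), ("approve", "archive"), ("approve", "notify")]))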

  15. A three-level atomicity model for decentralized workflow management systems

    NASA Astrophysics Data System (ADS)

    Ben-Shaul, Israel Z.; Heineman, George T.

    1996-12-01

    A workflow management system (WFMS) employs a workflow manager (WM) to execute and automate the various activities within a workflow. To protect the consistency of data, the WM encapsulates each activity with a transaction; a transaction manager (TM) then guarantees the atomicity of activities. Since workflows often group several activities together, the TM is responsible for guaranteeing the atomicity of these units. There are scalability issues, however, with centralized WFMSs. Decentralized WFMSs provide an architecture for multiple autonomous WFMSs to interoperate, thus accommodating multiple workflows and geographically-dispersed teams. When atomic units are composed of activities spread across multiple WFMSs, however, there is a conflict between global atomicity and local autonomy of each WFMS. This paper describes a decentralized atomicity model that enables workflow administrators to specify the scope of multi-site atomicity based upon the desired semantics of multi-site tasks in the decentralized WFMS. We describe an architecture that realizes our model and execution paradigm.

  16. Defense Science Study Group IV: Study Reports 1994-1995. Volume I

    DTIC Science & Technology

    1996-02-01

    ... mixed radioactive-hazardous wastes include steam reforming, wet air oxidation, and high pressure hydrothermal processing [11].

  17. Isoelectric focusing of proteins and peptides

    NASA Technical Reports Server (NTRS)

    Egen, N.

    1979-01-01

    Egg-white solution was chosen as the reference solution in order to assess the effects of operational parameters (voltage, flow rate, ampholine pH range and concentration, and protein concentration) of the RIEF apparatus on protein resolution. Topics of discussion include: (1) comparison of RIEF apparatus to conventional IEF techniques (column and PAG) with respect to resolution and throughput; (2) peptide and protein separation (AHF, Thymosin - Fraction 5, vasoactive peptide, L-asparaginase and ACP); and (3) detection of peptides - dansyl derivatives of amino acids and peptides, post-focusing fluorescent labeling of amino acids, peptides and proteins, and ampholine extraction from focused gels.

  18. A comprehensive evaluation of popular proteomics software workflows for label-free proteome quantification and imputation.

    PubMed

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2017-05-31

    Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of the biological and life sciences. Several software packages exist to process raw MS data into quantified protein abundances, including open-source and commercial solutions. Each package includes a set of unique algorithms for different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
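
    To make the imputation step concrete, the following is a minimal sketch of local least squares (lls) imputation for a protein-by-sample intensity matrix; it is a simplified illustration with made-up numbers, not the exact algorithm of any particular package evaluated in the study.

    import numpy as np

    # Minimal sketch of local least squares (lls) imputation: the neighbours of a
    # protein with missing values are the k complete proteins most correlated with
    # it over its observed samples. Simplified illustration with made-up values.

    def lls_impute(X, k=2):
        X = np.array(X, dtype=float)
        complete = np.where(~np.isnan(X).any(axis=1))[0]
        for i in np.where(np.isnan(X).any(axis=1))[0]:
            obs = ~np.isnan(X[i])
            if obs.sum() < 2 or len(complete) == 0:
                continue  # not enough information; leave the values missing
            cors = [abs(np.corrcoef(X[i, obs], X[j, obs])[0, 1]) for j in complete]
            nbrs = complete[np.argsort(cors)[::-1][:k]]
            # Express protein i as a least-squares combination of its neighbours
            # on the observed samples, then predict the missing samples.
            A = X[nbrs][:, obs].T
            coef, *_ = np.linalg.lstsq(A, X[i, obs], rcond=None)
            X[i, ~obs] = X[nbrs][:, ~obs].T @ coef
        return X

    X = [[10.0, 11.0, 12.0, 13.0],
         [9.5, 10.4, 11.6, 12.4],
         [10.2, 11.1, float("nan"), 13.1]]
    print(np.round(lls_impute(X), 2))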

  19. A practical data processing workflow for multi-OMICS projects.

    PubMed

    Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin

    2014-01-01

    Multi-OMICS approaches aim at the integration of quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur within this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge, using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable for other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.
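
    The transcript-versus-protein comparison described above reduces, in its simplest form, to regressing matched abundance values per gene; a minimal sketch with made-up log2 values follows.

    import numpy as np
    from scipy import stats

    # Minimal sketch: simple linear regression of protein abundance on mRNA
    # abundance for matched genes. The log2 values are made up for illustration.
    mrna = np.array([2.1, 4.0, 5.5, 6.8, 8.2, 9.1])     # log2 transcript level
    protein = np.array([1.5, 3.2, 4.1, 6.0, 6.9, 8.5])  # log2 protein level

    fit = stats.linregress(mrna, protein)
    print(f"slope={fit.slope:.2f}, R^2={fit.rvalue**2:.2f}, p={fit.pvalue:.3g}")
    # As the paper notes, a poor fit here would indicate that a simple linear
    # model does not capture the transcript-protein relationship.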

  20. Wireless-PDA-controlled image workflow from PACS: the next trend in the health care enterprise?

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Documet, Jorge; Zhou, Michael Z.; Cao, Fei; Liu, Brent J.; Mogel, Greg T.; Huang, H. K.

    2003-05-01

    Image workflow in today's Picture Archiving and Communication Systems (PACS) is controlled from fixed Display Workstations (DW) using proprietary control interfaces. Remote access to the Hospital Information System (HIS) and Radiology Information System (RIS) for urgent patient information retrieval either does not exist or is only gradually becoming available. This lack of remote access and workflow control for HIS and RIS is especially apparent when it comes to the medical images of a PACS at the department or hospital level. As images become more complex and data sizes expand rapidly with new imaging techniques such as functional MRI, mammography or routine spiral CT, to name a few, access and manageability become an important issue. Long image downloads or incomplete work lists cannot be tolerated in a busy health care environment. In addition, the domain of the PACS is no longer limited to the imaging department; PACS is also being used in the ER and emergency care units. Thus prompt and secure access and manageability, not only for the radiologist but also for the referring physician, become crucial to optimally utilize the PACS in the health care enterprise of the new millennium. The purpose of this paper is to introduce a concept, and its implementation, of remote access and workflow control for PACS combining wireless, Internet and Internet2 technologies. A wireless device, the Personal Digital Assistant (PDA), is used to communicate with a PACS web server that acts as a gateway controlling the commands for which the user has access to the PACS server. The commands implemented for this test-bed are query/retrieve of the patient list and study list, including modality, examination, series and image selection, and pushing any list items to a selected DW on the PACS network.

  1. A Tool Supporting Collaborative Data Analytics Workflow Design and Management

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Bao, Q.; Lee, T. J.

    2016-12-01

    Collaborative experiment design could significantly enhance the sharing and adoption of the data analytics algorithms and models that have emerged in Earth science. Existing data-oriented workflow tools, however, are not suitable for supporting collaborative design of such workflows: for example, they do not support real-time co-design; they do not track how a workflow evolves over time based on changing designs contributed by multiple Earth scientists; and they do not capture and retrieve collaboration knowledge on workflow design (the discussions that lead to a design). To address the aforementioned challenges, we have designed and developed a technique supporting collaborative data-oriented workflow composition and management, as a key component toward supporting big data collaboration through the Internet. Reproducibility and scalability are two major targets demanding fundamental infrastructural support. One outcome of the project is a software tool supporting an elastic number of groups of Earth scientists in collaboratively designing and composing data analytics workflows through the Internet. Instead of reinventing the wheel, we have extended an existing workflow tool, VisTrails, into an online collaborative environment as a proof of concept.

  2. Metabolic modelling in the development of cell factories by synthetic biology

    PubMed Central

    Jouhten, Paula

    2012-01-01

    Cell factories are commonly microbial organisms utilized for bioconversion of renewable resources to bulk or high-value chemicals. Introduction of novel production pathways in chassis strains is the core of the development of cell factories by synthetic biology. Synthetic biology aims to create novel biological functions and systems not found in nature by combining biology with engineering. The workflow of the development of novel cell factories with synthetic biology is ideally linear, which will be attainable with the quantitative engineering approach, high-quality predictive models, and libraries of well-characterized parts. Different types of metabolic models, mathematical representations of metabolism and its components, enzymes and metabolites, are useful in particular phases of the synthetic biology workflow. In this minireview, the role of metabolic modelling in synthetic biology will be discussed with a review of the current status of compatible methods and models for the in silico design and quantitative evaluation of a cell factory. PMID:24688669

  3. DOCKTITE-a highly versatile step-by-step workflow for covalent docking and virtual screening in the molecular operating environment.

    PubMed

    Scholz, Christoph; Knorr, Sabine; Hamacher, Kay; Schmidt, Boris

    2015-02-23

    The formation of a covalent bond with the target is essential for a number of successful drugs, yet tools for covalent docking without significant restrictions regarding warhead or receptor classes are rare and limited in use. In this work we present DOCKTITE, a highly versatile workflow for covalent docking in the Molecular Operating Environment (MOE) combining automated warhead screening, nucleophilic side chain attachment, pharmacophore-based docking, and a novel consensus scoring approach. The comprehensive validation study includes pose predictions of 35 protein/ligand complexes which resulted in a mean RMSD of 1.74 Å and a prediction rate of 71.4% with an RMSD below 2 Å, a virtual screening with an area under the curve (AUC) for the receiver operating characteristics (ROC) of 0.81, and a significant correlation between predicted and experimental binding affinities (ρ = 0.806, R(2) = 0.649, p < 0.005).

  4. Generation of genome-modified Drosophila cell lines using SwAP.

    PubMed

    Franz, Alexandra; Brunner, Erich; Basler, Konrad

    2017-10-02

    The ease of generating genetically modified animals and cell lines has been markedly increased by the recent development of the versatile CRISPR/Cas9 tool. However, while the isolation of isogenic cell populations is usually straightforward for mammalian cell lines, the generation of clonal Drosophila cell lines has remained a longstanding challenge, hampered by the difficulty of getting Drosophila cells to grow at low densities. Here, we describe a highly efficient workflow to generate clonal Cas9-engineered Drosophila cell lines using a combination of cell pools, limiting dilution in conditioned medium and PCR with allele-specific primers, enabling the efficient selection of a clonal cell line with a suitable mutation profile. We validate the protocol by documenting the isolation, selection and verification of eight independently Cas9-edited armadillo mutant Drosophila cell lines. Our method provides a powerful and simple workflow that improves the utility of Drosophila cells for genetic studies with CRISPR/Cas9.

  5. A healthcare Lean Six Sigma System for postanesthesia care unit workflow improvement.

    PubMed

    Kuo, Alex Mu-Hsing; Borycki, Elizabeth; Kushniruk, Andre; Lee, Te-Shu

    2011-01-01

    The aim of this article is to propose a new model called Healthcare Lean Six Sigma System that integrates Lean and Six Sigma methodologies to improve workflow in a postanesthesia care unit. The methodology of the proposed model is fully described. A postanesthesia care unit case study is also used to demonstrate the benefits of using the Healthcare Lean Six Sigma System model by combining Lean and Six Sigma methodologies together. The new model bridges the service gaps between health care providers and patients, balances the requirements of health care managers, and delivers health care services to patients by taking the benefits of the Lean speed and Six Sigma high-quality principles. The full benefits of the new model will be realized when applied at both strategic and operational levels. For further research, we will examine how the proposed model is used in different real-world case studies.

  6. New Tools For Understanding Microbial Diversity Using High-throughput Sequence Data

    NASA Astrophysics Data System (ADS)

    Knight, R.; Hamady, M.; Liu, Z.; Lozupone, C.

    2007-12-01

    High-throughput sequencing techniques such as 454 are straining the limits of tools traditionally used to build trees, choose OTUs, and perform other essential sequencing tasks. We have developed a workflow for phylogenetic analysis of large-scale sequence data sets that combines existing tools, such as the Arb phylogeny package and the NAST multiple sequence alignment tool, with new methods for choosing and clustering OTUs and for performing phylogenetic community analysis with UniFrac. This talk discusses the cyberinfrastructure we are developing to support the human microbiome project, and the application of these workflows to analyze very large data sets that contrast the gut microbiota with a range of physical environments. These tools will ultimately help to define core and peripheral microbiomes in a range of environments, and will allow us to understand the physical and biotic factors that contribute most to differences in microbial diversity.
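
    Choosing OTUs at a fixed identity threshold can be illustrated with a tiny greedy, centroid-based clustering sketch; the sequences are toy examples, and production pipelines such as the one described use optimized aligners and abundance-sorted heuristics.

    # Minimal sketch of greedy, centroid-based OTU picking at a fixed identity
    # threshold. Sequences are toy, equal-length examples; real pipelines rely on
    # optimized alignment and abundance-sorted heuristics.

    def identity(a, b):
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def pick_otus(seqs, threshold=0.97):
        centroids, otus = [], []
        for s in seqs:
            for i, c in enumerate(centroids):
                if identity(s, c) >= threshold:
                    otus[i].append(s)
                    break
            else:  # no centroid is close enough: start a new OTU
                centroids.append(s)
                otus.append([s])
        return otus

    reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGT",
             "ACGTACGTACGTACGTACGTACGTACGTACGTACGA",  # 1 mismatch (~97% identity)
             "TTTTACGTACGTACGTACGTACGTACGTACGTACGT"]  # 3 mismatches (~92% identity)
    print([len(otu) for otu in pick_otus(reads)])     # -> [2, 1]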

  7. Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis.

    PubMed

    Jones, Ryan T; Handsfield, Lydia; Read, Paul W; Wilson, David D; Van Ausdal, Ray; Schlesinger, David J; Siebers, Jeffrey V; Chen, Quan

    2015-01-01

    The clinical challenge of radiation therapy (RT) for painful bone metastases requires clinicians to consider both treatment efficacy and patient prognosis when selecting a radiation therapy regimen. The traditional RT workflow requires several weeks for common palliative RT schedules of 30 Gy in 10 fractions or 20 Gy in 5 fractions. At our institution, we have created a new RT workflow termed "STAT RAD" that allows clinicians to perform computed tomographic (CT) simulation, planning, and highly conformal single fraction treatment delivery within 2 hours. In this study, we evaluate the safety and feasibility of the STAT RAD workflow. A failure mode and effects analysis (FMEA) was performed on the STAT RAD workflow, including development of a process map, identification of potential failure modes, description of the cause and effect, temporal occurrence, and team member involvement in each failure mode, and examination of existing safety controls. A risk priority number (RPN) was calculated for each failure mode. As necessary, workflow adjustments were then made to safeguard against failure modes with significant RPN values. After workflow alterations, the RPNs were recomputed. A total of 72 potential failure modes were identified in the pre-FMEA STAT RAD workflow, of which 22 met the RPN threshold for clinical significance. Workflow adjustments included the addition of a team member checklist, changing simulation from megavoltage CT to kilovoltage CT, alteration of patient-specific quality assurance testing, and allocating increased time for critical workflow steps. After these modifications, only 1 failure mode maintained RPN significance: patient motion after alignment or during treatment. Performing the FMEA for the STAT RAD workflow before clinical implementation has significantly strengthened the safety and feasibility of STAT RAD. The FMEA proved a valuable evaluation tool, identifying potential problem areas so that we could create a safer workflow. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
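
    For readers unfamiliar with FMEA scoring, the sketch below shows how an RPN is formed from severity, occurrence, and detection ratings and how failure modes are flagged against a significance threshold; the ratings and the threshold value are hypothetical, as the abstract does not report the scales used.

      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          name: str
          severity: int      # 1-10 rating of consequence
          occurrence: int    # 1-10 rating of likelihood
          detection: int     # 1-10; higher means harder to catch before harm

          @property
          def rpn(self) -> int:
              """Risk priority number: the product of the three ratings."""
              return self.severity * self.occurrence * self.detection

      # Hypothetical ratings and cutoff, for illustration only.
      modes = [
          FailureMode("patient motion during treatment", 8, 4, 6),
          FailureMode("wrong CT protocol selected", 7, 2, 3),
      ]
      THRESHOLD = 125
      flagged = [(m.name, m.rpn) for m in modes if m.rpn >= THRESHOLD]
      print(flagged)   # [('patient motion during treatment', 192)]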

  8. Virtual Geophysics Laboratory: Exploiting the Cloud and Empowering Geophysicists

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Vote, Josh; Goh, Richard; Cox, Simon

    2013-04-01

    Over the last five decades geoscientists from Australian state and federal agencies have collected and assembled around 3 Petabytes of geoscience data sets under public funding. As a consequence of technological progress, data is now being acquired at exponential rates and in higher resolution than ever before. Effective use of these big data sets challenges the storage and computational infrastructure of most organizations. The Virtual Geophysics Laboratory (VGL) is a scientific workflow portal that addresses some of the resulting issues by providing Australian geophysicists with access to a Web 2.0 or Rich Internet Application (RIA) based integrated environment that exploits eResearch tools and Cloud computing technology, and promotes collaboration within the user community. VGL simplifies and automates large portions of what were previously manually intensive scientific workflow processes, allowing scientists to focus on the natural science problems, rather than computer science and IT. A number of geophysical processing codes are incorporated to support multiple workflows. For example, a gravity inversion can be performed by combining the Escript/Finley codes (from the University of Queensland) with the gravity data registered in VGL. Likewise, tectonic processes can also be modeled by combining the Underworld code (from Monash University) with one of the various 3D models available to VGL. Cloud services provide scalable and cost effective compute resources. VGL is built on top of mature standards-compliant information services, many deployed using the Spatial Information Services Stack (SISS), which provides direct access to geophysical data. A large number of data sets from Geoscience Australia assist users in data discovery. GeoNetwork provides a metadata catalog to store workflow results for future use, discovery and provenance tracking. VGL has been developed in collaboration with the research community using incremental software development practices and open source tools. While developed to provide the geophysics research community with a sustainable platform and scalable infrastructure, VGL has also produced a number of concepts, patterns and generic components that have been reused for cases beyond geophysics, including natural hazards, satellite processing and other areas requiring spatial data discovery and processing. Future plans for VGL include a number of improvements in both functional and non-functional areas in response to its user community needs and advancement in information technologies. In particular, research is underway in the following areas: (a) distributed and parallel workflow processing in the cloud, (b) seamless integration with various cloud providers, and (c) integration with virtual laboratories representing other science domains. Acknowledgements: VGL was developed by CSIRO in collaboration with Geoscience Australia, National Computational Infrastructure, Australia National University, Monash University and University of Queensland, and has been supported by the Australian Government's Education Investment Funds through NeCTAR.

  9. 76 FR 71928 - Defense Federal Acquisition Regulation Supplement; Updates to Wide Area WorkFlow (DFARS Case 2011...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ... Defense Federal Acquisition Regulation Supplement; Updates to Wide Area WorkFlow (DFARS Case 2011-D027... Wide Area WorkFlow (WAWF) and TRICARE Encounter Data System (TEDS). WAWF, which electronically... civil emergencies, when access to Wide Area WorkFlow by those contractors is not feasible; (4) Purchases...

  10. An Auto-management Thesis Program WebMIS Based on Workflow

    NASA Astrophysics Data System (ADS)

    Chang, Li; Jie, Shi; Weibo, Zhong

    An auto-management WebMIS based on workflow for a bachelor thesis program is given in this paper. A module for workflow dispatching is designed and realized using MySQL and J2EE according to the working principle of a workflow engine. The module can automatically dispatch the workflow according to the system date, login information and the work status of the user. The WebMIS shifts management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.
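
    The paper's dispatcher is implemented in Java/J2EE over MySQL; purely to illustrate the kind of rule it applies (routing on system date, login information and work status), here is a small Python sketch with hypothetical states and deadlines.

      from datetime import date

      def dispatch(today: date, role: str, status: str) -> str:
          """Toy dispatch rule: pick the next workflow step from the system date,
          the logged-in role, and the current work status (all states invented)."""
          proposal_deadline = date(today.year, 3, 31)
          if role == "student":
              if status == "not_started" and today <= proposal_deadline:
                  return "submit_proposal"
              if status == "proposal_submitted":
                  return "await_supervisor_review"
              return "upload_final_thesis"
          if role == "supervisor" and status == "proposal_submitted":
              return "review_proposal"
          return "no_action"

      print(dispatch(date(2024, 3, 1), "student", "not_started"))   # submit_proposal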

  11. Objectifying user critique. A means of continuous quality assurance for physician discharge letter composition.

    PubMed

    Oschem, M; Mahler, V; Prokosch, H U

    2011-01-01

    The aim of this study is to objectify user critique, rendering it usable for quality assurance. Based on formative and summative evaluation results we strive to promote software improvements; in our case, the physician discharge letter composition process at the Department of Dermatology, University Hospital Erlangen, Germany. We developed a novel six-step approach to objectify user critique: 1) acquisition of user critique using subjectivist methods, 2) creation of a workflow model, 3) definition of hypotheses and indicators, 4) measurement of indicators, 5) analysis of results, 6) optimization of the system regarding both subjectivist and objectivist evaluation results. In particular, we derived indicators and workflows directly from user critique/narratives. The identified indicators were mapped onto workflow activities, creating a link between user critique and the evaluated system. Users criticized a new discharge letter system as "too slow" and "too labor-intensive" in comparison with the previously used system. In a stepwise approach we collected subjective user critique, derived a comprehensive process model including deviations and deduced a set of five indicators for objectivist evaluation: processing time, system-related waiting time, number of mouse clicks, number of keyboard inputs, and throughput time. About 3500 measurements were performed to compare the workflow steps of both systems across 20 discharge letters. Although the difference in mean total processing time between both systems was statistically insignificant (2011.7 s vs. 1971.5 s; p = 0.457), we detected a significant difference in waiting times (101.8 s vs. 37.2 s; p < 0.001) and number of user interactions (77 vs. 69; p < 0.001) in favor of the old system, thus objectifying user critique. Our six-step approach enables objectification of user critique, resulting in objective values for continuous quality assurance. To our knowledge, no previous study in medical informatics has mapped user critique onto workflow steps. Subjectivist analysis prompted us to use the indicator system-related waiting time for the objectivist study, which was rarely done before. We consider combining subjectivist and objectivist methods as a key point of our approach. Future work will concentrate on automated measurement of indicators.
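
    As an illustration of turning such an indicator into an objective comparison, the sketch below applies Welch's t-test to per-letter waiting times; the individual values are invented (chosen only so that the group means match the reported 101.8 s and 37.2 s), and the abstract does not state which statistical test the authors used.

      from scipy import stats

      # Hypothetical per-letter waiting times (seconds), ~10 letters per system.
      waiting_old = [35, 39, 40, 36, 38, 37, 40, 34, 38, 35]         # mean 37.2 s
      waiting_new = [98, 105, 99, 110, 102, 97, 108, 100, 103, 96]   # mean 101.8 s

      t, p = stats.ttest_ind(waiting_new, waiting_old, equal_var=False)   # Welch's t-test
      print(f"mean new={sum(waiting_new)/len(waiting_new):.1f}s, "
            f"mean old={sum(waiting_old)/len(waiting_old):.1f}s, p={p:.2g}")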

  12. A workflow for improving estimates of microplastic contamination in marine waters: A case study from North-Western Australia.

    PubMed

    Kroon, Frederieke; Motti, Cherie; Talbot, Sam; Sobral, Paula; Puotinen, Marji

    2018-07-01

    Plastic pollution is ubiquitous throughout the marine environment, with microplastic (i.e. <5 mm) contamination a global issue of emerging concern. The lack of universally accepted methods for quantifying microplastic contamination, including consistent application of microscopy, photography and spectroscopy, may result in unrealistic contamination estimates. Here, we present and apply an analysis workflow tailored to quantifying microplastic contamination in marine waters, incorporating stereomicroscopic visual sorting, microscopic photography and attenuated total reflectance (ATR) Fourier transform infrared (FTIR) spectroscopy. The workflow outlines step-by-step processing and associated decision making, thereby reducing bias in plastic identification and improving confidence in contamination estimates. Specific processing steps include (i) the use of a commercial algorithm-based comparison of particle spectra against an extensive commercially curated spectral library, followed by spectral interpretation to establish the chemical composition, (ii) a comparison against a customised contaminant spectral library to eliminate procedural contaminants, and (iii) final assignment of particles as either natural- or anthropogenic-derived materials, based on chemical type, a comparison of each particle's spectrum against other particle spectra, and physical characteristics of particles. Applying this workflow to 54 tow samples collected in marine waters of North-Western Australia visually identified 248 potential anthropogenic particles. Subsequent ATR-FTIR spectroscopy, chemical assignment and visual re-inspection of photographs established 144 (58%) particles to be of anthropogenic origin. Of the original 248 particles, 97 (39%) were ultimately confirmed to be plastics, with 85 of these (34%) classified as microplastics, demonstrating that over 60% of particles may be misidentified as plastics if visual identification is not complemented by spectroscopy. Combined, this tailored analysis workflow outlines a consistent and sequential process to quantify contamination by microplastics and other anthropogenic microparticles in marine waters. Importantly, its application will contribute to more realistic estimates of microplastic contamination in marine waters, informing both ecological risk assessments and experimental concentrations in effect studies. Copyright © 2018 Australian Institute of Marine Science. Published by Elsevier Ltd. All rights reserved.
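
    Step (i) above relies on a commercial spectral-search algorithm; as a rough stand-in, the sketch below scores a particle spectrum against library spectra with a Pearson correlation on a common wavenumber grid, which is one common way such matches are ranked (the vendor algorithm used in the study may differ, and the spectra here are synthetic).

      import numpy as np

      def hit_quality(sample: np.ndarray, reference: np.ndarray) -> float:
          """Pearson correlation between two spectra sampled on the same grid."""
          s = sample - sample.mean()
          r = reference - reference.mean()
          return float(np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r)))

      def best_match(sample, library):
          """Return the (name, spectrum) library entry with the highest score."""
          return max(library.items(), key=lambda kv: hit_quality(sample, kv[1]))

      # Toy spectra; real ATR-FTIR spectra have thousands of points and need
      # baseline correction before matching.
      grid = np.linspace(0, 1, 200)
      library = {"polyethylene": np.sin(6 * grid), "cellulose": np.cos(3 * grid)}
      particle = np.sin(6 * grid) + 0.05 * np.random.default_rng(0).normal(size=200)
      print(best_match(particle, library)[0])   # polyethylene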

  13. Relating interesting quantitative time series patterns with text events and text features

    NASA Astrophysics Data System (ADS)

    Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.

    2013-12-01

    In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine analysis steps for the quantitative and the text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis workflow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
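
    The first analysis step, extracting "interesting intervals" from the price series, can be caricatured with a volatility heuristic like the one below; the actual interestingness measures and parameters used in the paper are richer, so treat the threshold rule and numbers as placeholders.

      import numpy as np

      def interesting_intervals(prices: np.ndarray, window: int = 5, k: float = 1.5):
          """Flag fixed-length windows whose return volatility exceeds k times the
          series-wide volatility (a simple stand-in for an interestingness measure)."""
          returns = np.diff(np.log(prices))
          global_sigma = returns.std()
          hits = []
          for start in range(len(returns) - window + 1):
              if returns[start:start + window].std() > k * global_sigma:
                  hits.append((start, start + window))
          return hits

      prices = np.array([100.0] * 8 + [95.0, 102.0, 94.0, 103.0] + [103.0] * 8)
      print(interesting_intervals(prices))   # windows overlapping the volatile stretch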

  14. Structuring clinical workflows for diabetes care: an overview of the OntoHealth approach.

    PubMed

    Schweitzer, M; Lasierra, N; Oberbichler, S; Toma, I; Fensel, A; Hoerbst, A

    2014-01-01

    Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view.

  15. Structuring Clinical Workflows for Diabetes Care

    PubMed Central

    Lasierra, N.; Oberbichler, S.; Toma, I.; Fensel, A.; Hoerbst, A.

    2014-01-01

    Summary Background Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. Objectives The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. Methods A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. Results This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, named as actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. Conclusions The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view. PMID:25024765

  16. A simple and unsupervised semi-automatic workflow to detect shallow landslides in Alpine areas based on VHR remote sensing data

    NASA Astrophysics Data System (ADS)

    Amato, Gabriele; Eisank, Clemens; Albrecht, Florian

    2017-04-01

    Landslide detection from Earth observation imagery is an important preliminary work for landslide mapping, landslide inventories and landslide hazard assessment. In this context, the object-based image analysis (OBIA) concept has been increasingly used over the last decade. Within the framework of the Land@Slide project (Earth observation based landslide mapping: from methodological developments to automated web-based information delivery) a simple, unsupervised, semi-automatic and object-based approach for the detection of shallow landslides has been developed and implemented in the InterIMAGE open-source software. The method was applied to an Alpine case study in western Austria, exploiting spectral information from pansharpened 4-band WorldView-2 satellite imagery (0.5 m spatial resolution) in combination with digital elevation models. First, we divided the image into sub-images, i.e. tiles, and then we applied the workflow to each of them without changing the parameters. The workflow was implemented as a top-down approach: at the image tile level, an over-classification of the potential landslide area was produced; the over-estimated area was re-segmented and re-classified by several processing cycles until most false positive objects had been eliminated. In every step, a segmentation based on the Baatz algorithm generates candidate landslide polygons. At the same time, the average values of normalized difference vegetation index (NDVI) and brightness are calculated for these polygons; after that, these values are used as thresholds to perform an object selection that improves the quality of the classification results. In combination, empirically determined values of slope and roughness are also used in the selection process. Results for each tile were merged to obtain the landslide map for the test area. For final validation, the landslide map was compared to a geological map and a supervised landslide classification in order to estimate its accuracy. Results for the test area showed that the proposed method is capable of accurately distinguishing landslides from roofs and trees. Implementation of the workflow into InterIMAGE was straightforward. We conclude that the method is able to extract landslides in forested areas, but that there is still room for improvements concerning the extraction in non-forested high-alpine regions.
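
    The object-selection step described above can be sketched as follows: compute NDVI from the red and near-infrared bands, then keep only segments with low mean NDVI, high brightness and steep slope. The numeric thresholds below are placeholders; the paper derives its values empirically for the study area.

      import numpy as np

      def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
          """Normalized difference vegetation index from NIR and red reflectance."""
          return (nir - red) / (nir + red + 1e-9)

      def keep_landslide_candidates(segments, ndvi_max=0.25, brightness_min=0.35, slope_min=15.0):
          """Retain segments that look like bare, bright, steep surfaces."""
          return [s for s in segments
                  if s["mean_ndvi"] < ndvi_max
                  and s["mean_brightness"] > brightness_min
                  and s["mean_slope_deg"] > slope_min]

      print(round(float(ndvi(np.array([0.6]), np.array([0.2]))[0]), 2))   # 0.5, a vegetated pixel
      segments = [
          {"id": 1, "mean_ndvi": 0.12, "mean_brightness": 0.48, "mean_slope_deg": 28.0},  # bare scar
          {"id": 2, "mean_ndvi": 0.62, "mean_brightness": 0.30, "mean_slope_deg": 22.0},  # forest
      ]
      print([s["id"] for s in keep_landslide_candidates(segments)])   # [1]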

  17. myExperiment: a repository and social network for the sharing of bioinformatics workflows

    PubMed Central

    Goble, Carole A.; Bhagat, Jiten; Aleksejevs, Sergejs; Cruickshank, Don; Michaelides, Danius; Newman, David; Borkum, Mark; Bechhofer, Sean; Roos, Marco; Li, Peter; De Roure, David

    2010-01-01

    myExperiment (http://www.myexperiment.org) is an online research environment that supports the social sharing of bioinformatics workflows. These workflows are procedures consisting of a series of computational tasks using web services, which may be performed on data from its retrieval, integration and analysis, to the visualization of the results. As a public repository of workflows, myExperiment allows anybody to discover those that are relevant to their research, which can then be reused and repurposed to their specific requirements. Conversely, developers can submit their workflows to myExperiment and enable them to be shared in a secure manner. Since its release in 2007, myExperiment has grown to over 3500 registered users and more than 1000 workflows. The social aspect of the sharing of these workflows is facilitated by registered users forming virtual communities bound together by a common interest or research project. Contributors of workflows can build their reputation within these communities by receiving feedback and credit from individuals who reuse their work. Further documentation about myExperiment including its REST web service is available from http://wiki.myexperiment.org. Feedback and requests for support can be sent to bugs@myexperiment.org. PMID:20501605

  18. Traversing the many paths of workflow research: developing a conceptual framework of workflow terminology through a systematic literature review

    PubMed Central

    Novak, Laurie L; Johnson, Kevin B; Lorenzi, Nancy M

    2010-01-01

    The objective of this review was to describe methods used to study and model workflow. The authors included studies set in a variety of industries using qualitative, quantitative and mixed methods. Of the 6221 matching abstracts, 127 articles were included in the final corpus. The authors collected data from each article on researcher perspective, study type, methods type, specific methods, approaches to evaluating quality of results, definition of workflow and dependent variables. Ethnographic observation and interviews were the most frequently used methods. Long study durations revealed the large time commitment required for descriptive workflow research. The most frequently discussed technique for evaluating quality of study results was triangulation. The definition of the term “workflow” and choice of methods for studying workflow varied widely across research areas and researcher perspectives. The authors developed a conceptual framework of workflow-related terminology for use in future research and present this model for use by other researchers. PMID:20442143

  19. Allergy to grass pollen: mapping of Dactylis glomerata and Phleum pratense allergens for dogs by two-dimensional immunoblotting

    PubMed Central

    Marques, Andreia Grilo; Pereira, Luísa Maria Dotti Silva; Semião-Santos, Saul José; Bento, Ofélia Pereira

    2017-01-01

    Introduction Much less is known about grass-pollen allergens in dogs than in humans. Genetic-based patterns might play an important role in sensitization profiles, conditioning the success of allergen-specific immunotherapy. Aim Mapping of Dactylis glomerata (D. glomerata) and Phleum pratense (P. pratense) allergens for grass pollen-sensitized atopic dogs, to better understand how individual allergograms may influence the response to grass-pollen immunotherapy. Material and methods To identify the D. glomerata and P. pratense allergomes for dogs, 15 individuals allergic to grass pollen and sensitized to D. glomerata and P. pratense were selected. D. glomerata and P. pratense proteomes were separated by isoelectric focusing (IEF), one-dimensional (1-D) and two-dimensional (2-D) sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Separated proteins were blotted onto polyvinylidene difluoride (PVDF) membranes and allergens were identified by IgE from patient sera in Western blotting (WB). Results In D. glomerata, 17 allergens were identified from IEF and 11 from 1-D SDS-PAGE, while from P. pratense, 18 and 6 allergens were identified, respectively. From 2-D SDS-PAGE, 13 spots were identified from D. glomerata and 27 from P. pratense. Conclusions Several similarities were found between dog and human D. glomerata and P. pratense sensitization profiles, but no relationship between clinical signs and a specific pattern of allergen recognition was observed. Similarities were found in each patient's pattern of sensitization between D. glomerata and P. pratense, also suggesting cross-reactive phenomena. A further molecular epidemiology approach is needed to understand the role of the sensitization pattern in allergen-specific immunotherapy effectiveness in grass-pollen allergic dogs. PMID:28261033

  20. Rapid DNA extraction protocol for detection of alpha-1 antitrypsin deficiency from dried blood spots by real-time PCR.

    PubMed

    Struniawski, R; Szpechcinski, A; Poplawska, B; Skronski, M; Chorostowska-Wynimko, J

    2013-01-01

    Dried blood spot (DBS) specimens have been successfully employed for the large-scale diagnostics of α1-antitrypsin (AAT) deficiency as an alternative to plasma/serum that is easy to collect and transport. In the present study we propose a fast, efficient, and cost-effective protocol for DNA extraction from DBS samples that provides sufficient quantity and quality of DNA and effectively eliminates any natural PCR inhibitors, allowing for successful AAT genotyping by real-time PCR and direct sequencing. DNA extracted from 84 DBS samples from chronic obstructive pulmonary disease patients was genotyped for AAT deficiency variants by real-time PCR. The results of DBS AAT genotyping were validated by serum IEF phenotyping and AAT concentration measurement. The proposed protocol allowed successful DNA extraction from all analyzed DBS samples. Both quantity and quality of DNA were sufficient for further real-time PCR and, if necessary, for genetic sequence analysis. A 100% concordance between AAT DBS genotypes and serum phenotypes in positive detection of the two major deficiency S- and Z- alleles was achieved. Both assays, DBS AAT genotyping by real-time PCR and serum AAT phenotyping by IEF, positively identified the PI*S and PI*Z alleles in 8 out of the 84 (9.5%) and 16 out of 84 (19.0%) patients, respectively. In conclusion, the proposed protocol noticeably reduces the costs and the hands-on time of DBS sample preparation, providing genomic DNA of sufficient quantity and quality for further real-time PCR or genetic sequence analysis. Consequently, it is ideally suited for large-scale AAT deficiency screening programs and should be the method of choice.

  1. Cell wall-bound cationic and anionic class III isoperoxidases of pea root: biochemical characterization and function in root growth.

    PubMed

    Kukavica, Biljana M; Veljovicc-Jovanovicc, Sonja D; Menckhoff, Ljiljana; Lüthje, Sabine

    2012-07-01

    Cell wall isolated from pea roots was used to separate and characterize two fractions possessing class III peroxidase activity: (i) ionically bound proteins and (ii) covalently bound proteins. Modified SDS-PAGE separated peroxidase isoforms by their apparent molecular weights: four bands of 56, 46, 44, and 41 kDa were found in the ionically bound fraction (iPOD) and one band (70 kDa) was resolved after treatment of the cell wall with cellulase and pectinase (cPOD). Isoelectric focusing (IEF) patterns for iPODs and cPODs were significantly different: five iPODs with highly cationic pI (9.5-9.2) were detected, whereas the nine cPODs were anionic with pI values between pH 3.7 and 5. iPODs and cPODs showed rather specific substrate affinity and different sensitivity to inhibitors, heat, and deglycosylation treatments. Peroxidase and oxidase activities and their IEF patterns for both fractions were determined in different zones along the root and in roots of different ages. New iPODs with pI 9.34 and 9.5 were induced with root growth, while the activity of cPODs was more related to the formation of the cell wall in non-elongating tissue. Treatment with auxin that inhibits root growth led to suppression of iPOD and induction of cPOD. A similar effect was obtained with the widely used elicitor, chitosan, which also induced cPODs with pI 5.3 and 5.7, which may be specifically related to pathogen defence. The differences reported here between the biochemical properties of cPOD and iPOD, and their differential induction during development and under specific treatments, imply that they are involved in specific and distinct physiological processes.

  2. High Frequency of Hb E-Saskatoon (HBB: c.67G > A) in Brazilians: A New Genetic Origin?

    PubMed

    Wagner, Sandrine C; Lindenau, Juliana D; Castro, Simone M de; Santin, Ana Paula; Zaleski, Carina F; Azevedo, Laura A; Ribeiro Dos Santos, Ândrea K C; Dos Santos, Sidney E B; Hutz, Mara H

    2016-08-01

    Hb E-Saskatoon [β22(B4)Glu→Lys, HBB: c.67G > A] is a rare, nonpathological β-globin variant that was first described in a Canadian woman of Scottish and Dutch ancestry and has since then been detected in several populations. The aim of the present study was to identify the origin of Hb E-Saskatoon in Brazil using β-globin haplotypes and genetic ancestry in carriers of this hemoglobin (Hb) variant. Blood samples were investigated by isoelectric focusing (IEF) and high performance liquid chromatography (HPLC) using commercial kits. Hb E-Saskatoon was confirmed by amplification of the HBB gene, followed by sequence analysis. Haplotypes of the β-globin gene were determined by polymerase chain reaction (PCR), followed by digestion with specific restriction enzymes. Individual ancestry was estimated with 48 biallelic insertion/deletions using three 16-plex PCR amplifications. The IEF pattern was similar to Hbs C (HBB: c.19G > A) and Hb E (HBB: c.79G > A) [isoelectric point (pI): 7.59-7.65], and HPLC results showed an elution in the Hb S (HBB: c.20A > T) window [retention time (RT): 4.26-4.38]. DNA sequencing of the amplified β-globin gene showed a mutation at codon 22 (GAA>AAA) corresponding to Hb E-Saskatoon. A total of 11 cases of this variant were identified. In nine unrelated individuals, Hb E-Saskatoon was in linkage disequilibrium with haplotype 2 [+ - - - -]. All subjects showed a high degree of European contribution (mean = 0.85). Hb E-Saskatoon occurred on the β-globin gene of haplotype 2 in all Brazilian carriers. These findings suggest a different genetic origin for this Hb variant from that previously described.

  3. Analysis of the liver mitochondrial proteome in response to ethanol and S-adenosylmethionine treatments: novel molecular targets of disease and hepatoprotection.

    PubMed

    Andringa, Kelly K; King, Adrienne L; Eccleston, Heather B; Mantena, Sudheer K; Landar, Aimee; Jhala, Nirag C; Dickinson, Dale A; Squadrito, Giuseppe L; Bailey, Shannon M

    2010-05-01

    S-adenosylmethionine (SAM) minimizes alcohol hepatotoxicity; however, the molecular mechanisms responsible for SAM hepatoprotection remain unknown. Herein, we use proteomics to determine whether the hepatoprotective action of SAM against early-stage alcoholic liver disease is linked to alterations in the mitochondrial proteome. For this, male rats were fed control or ethanol-containing liquid diets +/- SAM and liver mitochondria were prepared for proteomic analysis. Two-dimensional isoelectric focusing (2D IEF/SDS-PAGE) and blue native gel electrophoresis (BN-PAGE) were used to determine changes in matrix and oxidative phosphorylation (OxPhos) proteins, respectively. SAM coadministration minimized alcohol-dependent inflammation and preserved mitochondrial respiration. SAM supplementation preserved liver SAM levels in ethanol-fed rats; however, mitochondrial SAM levels were increased by ethanol and SAM treatments. With use of 2D IEF/SDS-PAGE, 30 proteins showed significant changes in abundance in response to ethanol, SAM, or both. Classes of proteins affected by ethanol and SAM treatments were chaperones, beta oxidation proteins, sulfur metabolism proteins, and dehydrogenase enzymes involved in methionine, glycine, and choline metabolism. BN-PAGE revealed novel changes in the levels of 19 OxPhos proteins in response to ethanol, SAM, or both. Ethanol- and SAM-dependent alterations in the proteome were not linked to corresponding changes in gene expression. In conclusion, ethanol and SAM treatment led to multiple changes in the liver mitochondrial proteome. The protective effects of SAM against alcohol toxicity are mediated, in part, through maintenance of proteins involved in key mitochondrial energy conserving and biosynthetic pathways. This study demonstrates that SAM may be a promising candidate for treatment of alcoholic liver disease.

  4. [Purification and properties of Se-containing allophycocyanins from selenium rich Spirulina platensis].

    PubMed

    Huang, Zhi; Yang, Fang; Zheng, Wen-Jie

    2006-06-01

    Three Se-containing allophycocyanins (Se-APC) of high purity were purified from Se-rich Spirulina platensis (Se-sp.) by hydroxyapatite chromatography, DEAE-52 anion-exchange chromatography and native gel preparative electrophoresis. Their biochemical properties were explored by spectral scanning and electrophoretic analysis using Native-PAGE, SDS-PAGE and IEF on thin slab gels. The protein molecular weight (MW) of the APC aggregates was determined by gel filtration on a Sephadex G-200 column. The Se content of native and denatured Se-APC was detected by the 2,3-DAN fluorescence method. According to their visible and fluorescence spectral characteristics, the three purified APC fractions were identified as APCI, APCII and APCIII. Native-PAGE and SDS-PAGE analysis revealed that they all formed (alphabeta)3 trimers of alpha and beta subunits with molecular masses of 18.3 kDa and 15.7 kDa; APCI visibly contained a gamma subunit (about 32 kDa), and APCIII may contain the L(C) linker peptide (8-10 kDa), based on their determined molecular masses of 130.9, 98.1 and 106.3 kDa. IEF detection showed that the pI values of the Se-APCs were 4.76, 4.85 and 5.02, respectively. The Se contents of the three purified Se-APCs were 316, 273 and 408 microg/g, which decreased by about 25% after deaggregation with 0.50 mol/L NaSCN and by more than 50% after denaturation with 2-mercaptoethanol, reaching a steady content of 132 microg/g on average. These results indicate that Se incorporation into APC had no influence on the energy-transfer function or the biochemical properties of the APCs, and that Se binding to APCs was highly dependent on their aggregation state, whereas Se integrated stably with the subunits.

  5. Disruption of Radiologist Workflow.

    PubMed

    Kansagra, Akash P; Liu, Kevin; Yu, John-Paul J

    2016-01-01

    The effect of disruptions has been studied extensively in surgery and emergency medicine, and a number of solutions-such as preoperative checklists-have been implemented to enforce the integrity of critical safety-related workflows. Disruptions of the highly complex and cognitively demanding workflow of modern clinical radiology have only recently attracted attention as a potential safety hazard. In this article, we describe the variety of disruptions that arise in the reading room environment, review approaches that other specialties have taken to mitigate workflow disruption, and suggest possible solutions for workflow improvement in radiology. Copyright © 2015 Mosby, Inc. All rights reserved.

  6. Workflow based framework for life science informatics.

    PubMed

    Tiwari, Abhishek; Sekhar, Arvind K T

    2007-10-01

    Workflow technology is a generic mechanism to integrate diverse types of available resources (databases, servers, software applications and different services) which facilitate knowledge exchange within traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own research protocols for scientific analysis. Application of workflow technology has been reported in areas like drug discovery, genomics, large-scale gene expression analysis, proteomics, and system biology. In this article, we have discussed the existing workflow systems and the trends in applications of workflow based systems.

  7. Workflows for microarray data processing in the Kepler environment.

    PubMed

    Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark

    2012-05-17

    Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to traditional shell scripting or R/BioConductor scripting approaches to pipeline design. Finally, we suggest that microarray data processing task workflows may provide a basis for future example-based comparison of different workflow systems. We provide a set of tools and complete workflows for microarray data analysis in the Kepler environment, which has the advantages of offering graphical, clear display of conceptual steps and parameters and the ability to easily integrate other resources such as remote data and web services.

  8. Dynamic reusable workflows for ocean science

    USGS Publications Warehouse

    Signell, Richard; Fernandez, Filipe; Wilcox, Kyle

    2016-01-01

    Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog search and data access make it now possible to create catalog-driven workflows that automate — end-to-end — data search, analysis and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill-assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enters the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased use of dynamic notebooks across the geoscience domains.
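
    As a small, self-contained example of the skill assessment such a notebook performs, the sketch below computes a mean-squared-error skill score of a forecast against observations relative to a persistence baseline; the IOOS notebooks compute comparable model-versus-observation statistics, though not necessarily this exact metric, and the numbers here are invented.

      import numpy as np

      def skill_score(model: np.ndarray, obs: np.ndarray, reference: np.ndarray) -> float:
          """1 - MSE(model, obs) / MSE(reference, obs): 1 is perfect, 0 matches the
          reference forecast, negative is worse than the reference."""
          mse_model = np.mean((model - obs) ** 2)
          mse_ref = np.mean((reference - obs) ** 2)
          return float(1.0 - mse_model / mse_ref)

      obs = np.array([14.2, 14.5, 15.1, 15.8, 16.0])        # observed water temperature (deg C)
      forecast = np.array([14.0, 14.6, 15.3, 15.6, 16.2])   # model forecast at the same buoy
      persistence = np.full_like(obs, obs[0])               # naive reference: repeat first value
      print(round(skill_score(forecast, obs, persistence), 2))   # 0.97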

  9. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI project (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess Platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows. The Executor Web Service manages the execution of the instantiated workflows on the Grid infrastructure. In addition, this web service monitors the execution and generates statistical data that are important for evaluating performance and optimizing execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. The GreenView-related functionality includes the refinement of some meteorological data such as temperature, and the calibration of the satellite images based on field measurements. The GreenLand application performs the classification of satellite images using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance assessment experiments revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing all workflow nodes on different machines is not always efficient: some nodes take considerably longer than others and dominate the total execution time, so it is important to balance the workflow nodes correctly. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically or in a hybrid approach.
In this way, those operators are executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it prone to failures, which can occur at the level of worker nodes, service availability, storage elements, etc. Currently gProcess supports some basic error prevention and error management solutions; more advanced solutions will be integrated into the gProcess platform in the future. References [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol.2., ISSN 2066-4451, pp. 423-430, (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu, A., Bacu, V., Gorgan, D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, 2007, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug, 2009 Cluj-Napoca. ISBN: 978-1-4244-5007-7, pp. 355-358 (2009). [8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.

  10. A versatile mathematical work-flow to explore how Cancer Stem Cell fate influences tumor progression.

    PubMed

    Fornari, Chiara; Balbo, Gianfranco; Halawani, Sami M; Ba-Rukab, Omar; Ahmad, Ab Rahman; Calogero, Raffaele A; Cordero, Francesca; Beccuti, Marco

    2015-01-01

    Nowadays, multidisciplinary approaches combining mathematical models with experimental assays are becoming relevant for the study of biological systems. Indeed, in cancer research multidisciplinary approaches are successfully used to understand the crucial aspects implicated in tumor growth. In particular, Cancer Stem Cell (CSC) biology represents an area particularly suited to be studied through multidisciplinary approaches, and modeling has significantly contributed to pinpointing the crucial aspects implicated in this theory. More generally, to acquire new insights into a biological system it is necessary to have an accurate description of the phenomenon, so that making accurate predictions about its future behavior becomes more likely. In this context, the identification of the parameters influencing model dynamics can be advantageous for increasing model accuracy and for providing hints in designing wet experiments. Different techniques, ranging from statistical methods to analytical studies, have been developed. Their applications depend on case-specific aspects, such as the availability and quality of experimental data, and the dimension of the parameter space. The study of a new model of CSC-based tumor progression motivated the design of a new work-flow that helps to characterize possible system dynamics and to identify those parameters influencing such behaviors. In detail, we extended our recent model of CSC dynamics, creating a new system capable of describing tumor growth during the different stages of cancer progression. Indeed, tumor cells appear to progress through lineage stages like those of normal tissues, with their division auto-regulated by internal feedback mechanisms. These new features have introduced some non-linearities in the model, making it more difficult to study by analytical techniques alone. Our new work-flow, based on statistical methods, was used to identify the parameters that influence tumor growth. The effectiveness of the presented work-flow was first verified on two well-known models and then applied to investigate our extended CSC model. We propose a new work-flow to study complex systems in a practical and informative way, allowing an easy identification, interpretation, and visualization of the key model parameters. Our methodology is useful for investigating possible model behaviors and for establishing factors driving model dynamics. Analyzing our new CSC model guided by the proposed work-flow, we found that the deregulation of CSC asymmetric proliferation contributes to cancer initiation, in accordance with several lines of experimental evidence. Specifically, model results indicated that the probability of CSC symmetric proliferation is responsible for a switching-like behavior that discriminates between tumorigenesis and unsustainable tumor growth.
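
    To make the reported switching behavior concrete, the sketch below uses a deliberately minimal one-compartment caricature in which the cancer stem cell pool grows exponentially at a rate set by the probability of symmetric self-renewal minus a fixed loss rate; it is not the authors' multi-stage lineage model, and all rates are invented.

      import numpy as np

      def csc_pool(p_sym: float, days: int = 60, k_div: float = 0.4,
                   loss: float = 0.1, s0: float = 10.0) -> np.ndarray:
          """Stem-cell pool over time: the net growth rate p_sym*k_div - loss changes
          sign at p_sym = loss/k_div, giving a switch between expansion and decay."""
          t = np.arange(days)
          return s0 * np.exp((p_sym * k_div - loss) * t)

      for p in (0.1, 0.4):
          print(p, round(float(csc_pool(p)[-1]), 1))
      # Low p_sym: the pool decays (unsustainable growth); high p_sym: it expands.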

  11. Neurosurgical sapphire handheld probe for intraoperative optical diagnostics, laser coagulation and aspiration of malignant brain tissue

    NASA Astrophysics Data System (ADS)

    Shikunova, Irina A.; Zaytsev, Kirill I.; Stryukov, Dmitrii O.; Dubyanskaya, Evgenia N.; Kurlov, Vladimir N.

    2017-07-01

    In this paper, a handheld contact probe based on a sapphire shaped crystal was developed for the intraoperative optical diagnosis and aspiration of malignant brain tissue, combined with laser hemostasis. Such a favorable combination of several functions in a single instrument significantly increases its clinical relevance, enabling highly accurate real-time detection and removal of both large-scale malignancies and even isolated invasive cancer cells. The proposed neuroprobe was integrated into the clinical neurosurgical workflow for the intraoperative fluorescence identification and removal of malignant brain tissues.

  12. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.
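
    Calling one of the exported workflow web services amounts to a single HTTP request; the sketch below shows the idea for a REST deployment, but the endpoint URL and JSON payload shape are hypothetical, since the actual interface depends on how the service was exported and hosted.

      import requests

      WORKFLOW_URL = "http://example.org/u-compare/workflows/ner-pipeline"   # hypothetical

      def run_workflow(text: str) -> dict:
          """POST a document to an exported text-mining workflow service and
          return its JSON response (annotations, entities, etc.)."""
          response = requests.post(WORKFLOW_URL, json={"text": text}, timeout=60)
          response.raise_for_status()
          return response.json()

      if __name__ == "__main__":
          print(run_workflow("p53 regulates apoptosis in human cells."))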

  13. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  14. Health information exchange technology on the front lines of healthcare: workflow factors and patterns of use

    PubMed Central

    Johnson, Kevin B; Lorenzi, Nancy M

    2011-01-01

    Objective The goal of this study was to develop an in-depth understanding of how a health information exchange (HIE) fits into clinical workflow at multiple clinical sites. Materials and Methods The ethnographic qualitative study was conducted over a 9-month period in six emergency departments (ED) and eight ambulatory clinics in Memphis, Tennessee, USA. Data were collected using direct observation, informal interviews during observation, and formal semi-structured interviews. The authors observed for over 180 h, during which providers used the exchange 130 times. Results HIE-related workflow was modeled for each ED site and ambulatory clinic group and substantial site-to-site workflow differences were identified. Common patterns in HIE-related workflow were also identified across all sites, leading to the development of two role-based workflow models: nurse based and physician based. The workflow elements framework was applied to the two role-based patterns. An in-depth description was developed of how providers integrated HIE into existing clinical workflow, including prompts for HIE use. Discussion Workflow differed substantially among sites, but two general role-based HIE usage models were identified. Although providers used HIE to improve continuity of patient care, patient–provider trust played a significant role. Types of information retrieved related to roles, with nurses seeking to retrieve recent hospitalization data and more open-ended usage by nurse practitioners and physicians. User and role-specific customization to accommodate differences in workflow and information needs may increase the adoption and use of HIE. Conclusion Understanding end users' perspectives towards HIE technology is crucial to the long-term success of HIE. By applying qualitative methods, an in-depth understanding of HIE usage was developed. PMID:22003156

  15. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator.

    PubMed

    Garcia Castro, Alexander; Thoraval, Samuel; Garcia, Leyla J; Ragan, Mark A

    2005-04-07

    Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous analytical tools.
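
    To make the idea of an XML-encoded analysis protocol concrete, the following Python sketch assembles a toy two-step pipeline description with xml.etree.ElementTree. The element and attribute names are invented for illustration and do not reproduce GPIPE's actual schema.

      # Sketch: serialising a two-step workflow (alignment -> tree inference)
      # as XML so that methods, parameters and their order can be shared or replayed.
      # Element and attribute names are illustrative, not the real GPIPE schema.
      import xml.etree.ElementTree as ET

      pipeline = ET.Element("pipeline", name="example_phylogeny")

      step1 = ET.SubElement(pipeline, "step", id="1", method="clustalw")
      ET.SubElement(step1, "param", name="infile").text = "sequences.fasta"
      ET.SubElement(step1, "param", name="output").text = "alignment.aln"

      step2 = ET.SubElement(pipeline, "step", id="2", method="protpars", depends_on="1")
      ET.SubElement(step2, "param", name="infile").text = "alignment.aln"

      ET.ElementTree(pipeline).write("pipeline.xml", encoding="utf-8", xml_declaration=True)

    Storing the linked methods and parameters in one file is what makes the experiment reproducible or shareable, as the abstract describes.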

  16. Provenance-Powered Automatic Workflow Generation and Composition

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.

    2015-12-01

    In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this often runs counter to a domain scientist's daily routine of conducting research and exploration. We aim to resolve this mismatch. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from recorded user behavior, supporting the adaptation and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure has been established to capture richer provenance and further facilitate collaboration in the science community. We have also established a Petri nets-based verification instrument for provenance-based automatic workflow generation and recommendation.
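
    As a toy illustration of recording provenance from which a workflow could later be reconstructed, the sketch below logs each service invocation (inputs, parameters, outputs) as the scientist works; the event structure is an assumption for the example and not the authors' provenance model.

      # Sketch: append-only provenance log of service invocations. Each record
      # captures enough (service, parameters, inputs, outputs) that the chain of
      # calls can later be replayed as a workflow. The schema is illustrative only.
      import json
      import time

      LOG_PATH = "provenance_log.jsonl"

      def record_invocation(service, parameters, inputs, outputs):
          event = {
              "timestamp": time.time(),
              "service": service,
              "parameters": parameters,
              "inputs": inputs,
              "outputs": outputs,
          }
          with open(LOG_PATH, "a") as fh:
              fh.write(json.dumps(event) + "\n")

      def replay():
          """Read the log back in order - the raw material for workflow generation."""
          with open(LOG_PATH) as fh:
              return [json.loads(line) for line in fh]

      if __name__ == "__main__":
          record_invocation("regrid_sst", {"resolution": "1deg"},
                            ["amsre_sst_2010.nc"], ["sst_regridded.nc"])
          record_invocation("anomaly", {"baseline": "1981-2010"},
                            ["sst_regridded.nc"], ["sst_anomaly.nc"])
          for event in replay():
              print(event["service"], "->", event["outputs"])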

  17. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    NASA Astrophysics Data System (ADS)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.
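
    For orientation, the Leibfried-Schlömann relation ties the lattice thermal conductivity to the quantities named above. A schematic, prefactor-free form (the prefactor is a weak function of the Grüneisen parameter, and the exact conventions used in the AFLOW workflow may differ) is

        \kappa_L(T) \;\propto\; \frac{k_B^{3}}{\hbar^{3}}\,
        \frac{\bar{M}\,\delta\,\theta_a^{3}}{\gamma^{2}\,T},
        \qquad \theta_a = \theta_D\, n^{-1/3},

    where \bar{M} is the average atomic mass, \delta^{3} the volume per atom, \theta_a the acoustic-mode Debye temperature derived from the Debye temperature \theta_D and the number of atoms per cell n, \gamma the Grüneisen parameter, and T the temperature. This makes explicit why accurate Debye temperatures and Grüneisen parameters, obtained here from the elastic and GIBBS libraries, control the quality of the thermal conductivity screening.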

  18. Identifying impact of software dependencies on replicability of biomedical workflows.

    PubMed

    Miksa, Tomasz; Rauber, Andreas; Mina, Eleni

    2016-12-01

    Complex, data-driven experiments form the basis of biomedical research. Recent findings warn that the context in which the software is run, that is, the infrastructure and the third-party dependencies, can have a crucial impact on the final results delivered by a computational experiment. This implies that in order to replicate the same result, not only must the same data be used, but the experiment must also be run on an equivalent software stack. In this paper we present the VFramework that enables assessing replicability of workflows. It identifies whether any differences in software dependencies exist between two executions of the same workflow and whether they have an impact on the produced results. We also conduct a case study in which we investigate the impact of software dependencies on the replicability of Taverna workflows used in biomedical research on Huntington's disease. We re-execute the analysed workflows in environments differing in operating system distribution and configuration. The results show that the VFramework can be used to identify the impact of software dependencies on the replicability of biomedical workflows. Furthermore, we observe that despite the fact that the workflows are executed in a controlled environment, they still depend on specific tools installed in the environment. The context model used by the VFramework addresses the deficiencies of provenance traces and also documents such tools. Based on our findings we define guidelines for workflow owners that enable them to improve the replicability of their workflows. Copyright © 2016 Elsevier Inc. All rights reserved.
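
    A very small Python sketch of the underlying idea (not the VFramework itself): snapshot the installed package versions of an execution environment and diff two snapshots to flag dependency differences that could explain diverging workflow results.

      # Sketch: record the installed Python package versions of an environment
      # and compare two such snapshots. This illustrates dependency-difference
      # detection in spirit only; it is not the VFramework's context model.
      import json
      from importlib.metadata import distributions

      def snapshot(path):
          versions = {d.metadata["Name"]: d.version for d in distributions()}
          with open(path, "w") as fh:
              json.dump(versions, fh, indent=2, sort_keys=True)

      def diff(path_a, path_b):
          with open(path_a) as fa, open(path_b) as fb:
              a, b = json.load(fa), json.load(fb)
          for name in sorted(set(a) | set(b)):
              if a.get(name) != b.get(name):
                  print(f"{name}: {a.get(name, 'absent')} -> {b.get(name, 'absent')}")

      if __name__ == "__main__":
          snapshot("env_A.json")   # run once in each environment, then:
          # diff("env_A.json", "env_B.json")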

  19. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-11-28

    At present, coding sequences (CDS) continue to be discovered, and ever larger CDS datasets are being released. Approaches and related tools have also been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own deployed local web services. The workflow input is a set of CDS in the Fasta format. The workflow supports bootstrapping with 1,000 to 20,000 replicates. The workflow performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2. These SOAP and Java Web Services (JWS) provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance measured, and its results verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. The proposed integrated automatic workflow will be beneficial to bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.
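
    To give a flavour of how a Taverna workflow (or any SOAP-capable client) consumes such WSDL endpoints, here is a Python sketch using the zeep SOAP library. The WSDL URL, operation name, and arguments are placeholders invented for illustration, not the actual services deployed at the authors' portal.

      # Sketch: calling a SOAP web service described by a WSDL document.
      # The WSDL URL, operation name and arguments below are hypothetical;
      # they only show the general client-side pattern.
      from zeep import Client

      WSDL_URL = "http://bioservices.example.org/msa?wsdl"  # placeholder, not the real portal

      client = Client(WSDL_URL)

      # Operation and parameter names depend on the service's WSDL;
      # 'align' and 'sequences' are illustrative stand-ins.
      result = client.service.align(sequences=open("cds.fasta").read())
      print(result)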

  20. Integrated Automatic Workflow for Phylogenetic Tree Analysis Using Public Access and Local Web Services.

    PubMed

    Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat

    2016-03-01

    At present, coding sequences (CDS) continue to be discovered, and ever larger CDS datasets are being released. Approaches and related tools have also been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own deployed local web services. The workflow input is a set of CDS in the Fasta format. The workflow supports bootstrapping with 1,000 to 20,000 replicates. The workflow performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two forms, using Soaplab2 and Apache Axis2. These SOAP and Java Web Services (JWS) provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance measured, and its results verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. The proposed integrated automatic workflow will be beneficial to bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.

  1. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931

  2. An efficient laboratory workflow for environmental risk assessment of organic chemicals.

    PubMed

    Zhu, Linyan; Santiago-Schübel, Beatrix; Xiao, Hongxia; Thiele, Björn; Zhu, Zhiliang; Qiu, Yanling; Hollert, Henner; Küppers, Stephan

    2015-07-01

    In this study, we demonstrate a fast and efficient workflow to investigate the transformation mechanism of organic chemicals and evaluate the toxicity of their transformation products (TPs) at laboratory scale. The transformation process of organic chemicals was first simulated by electrochemistry coupled online to mass spectrometry (EC-MS). The simulated reactions were scaled up in a batch EC reactor to obtain larger amounts of the reaction mixture. The mixture sample was purified and concentrated by solid phase extraction (SPE) for further ecotoxicological testing. The combined toxicity of the reaction mixture was evaluated in the fish egg test (FET) with Danio rerio and compared to that of the parent compound. The workflow was verified with carbamazepine (CBZ). Using EC-MS, seven primary TPs of CBZ were identified; the degradation mechanism was elucidated and confirmed by comparison with the literature. The reaction mixture and one primary product (acridine) showed higher ecotoxicity in the fish egg assay, with 96 h EC50 values of 1.6 and 1.0 mg L(-1), respectively, than CBZ, whose EC50 was 60.8 mg L(-1). The results highlight the importance of studying transformation mechanisms and evaluating toxicological effects for organic chemicals released into the environment, since their transformation may increase toxicity. The developed process provides a fast and efficient laboratory method for the risk assessment of organic chemicals and their TPs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Digital Workflows for Restoration and Management of the Museum Affandi - a Case Study in Challenging Circumstances

    NASA Astrophysics Data System (ADS)

    Herbig, U.; Styhler-Aydın, G.; Grandits, D.; Stampfer, L.; Pont, U.; Mayer, I.

    2017-08-01

    The appropriate restoration of architectural heritage requires careful and comprehensive documentation of the existing structures, which becomes even more elaborate when the function of the building needs special attention, as in museums. In a collaborative project between the Universitas Gadjah Mada, Yogyakarta, Indonesia and two universities in Austria (TU Wien and the Danube University Krems), a restoration and adaptation concept for the Affandi Museum in Yogyakarta is currently in progress. It provides a perfect case study for the development of a workflow to combine data from a building survey, architectural research, indoor climate measurements and the documentation of artwork in a challenging environment, ranging from a hot and humid tropical climate to continuous threats from natural hazards such as earthquakes or volcanic eruptions. The Affandi Museum houses the collection of Affandi, who is considered Indonesia's foremost Expressionist painter and who partly designed and constructed the museum himself. With the spirit of the artist still perceptible in the complex, the Affandi Museum is an important part of Indonesian cultural heritage. Its preservation therefore requires special attention and adds to the complexity of developing a monitoring and maintenance concept. This paper describes the ongoing development of a workflow that proceeds from the measurement and study of the objects, both architectural and artistic, to a semantically enriched BIM model serving as the basis for a sustainable monitoring tool for the Affandi Museum.

  4. VIPER: Visualization Pipeline for RNA-seq, a Snakemake workflow for efficient and complete RNA-seq analysis.

    PubMed

    Cornwell, MacIntosh; Vangala, Mahesh; Taing, Len; Herbert, Zachary; Köster, Johannes; Li, Bo; Sun, Hanfei; Li, Taiwen; Zhang, Jian; Qiu, Xintao; Pun, Matthew; Jeselsohn, Rinath; Brown, Myles; Liu, X Shirley; Long, Henry W

    2018-04-12

    RNA sequencing has become a ubiquitous technology used throughout life sciences as an effective method of measuring RNA abundance quantitatively in tissues and cells. The increase in use of RNA-seq technology has led to the continuous development of new tools for every step of analysis from alignment to downstream pathway analysis. However, effectively using these analysis tools in a scalable and reproducible way can be challenging, especially for non-experts. Using the workflow management system Snakemake we have developed a user friendly, fast, efficient, and comprehensive pipeline for RNA-seq analysis. VIPER (Visualization Pipeline for RNA-seq analysis) is an analysis workflow that combines some of the most popular tools to take RNA-seq analysis from raw sequencing data, through alignment and quality control, into downstream differential expression and pathway analysis. VIPER has been created in a modular fashion to allow for the rapid incorporation of new tools to expand the capabilities. This capacity has already been exploited to include very recently developed tools that explore immune infiltrate and T-cell CDR (Complementarity-Determining Regions) reconstruction abilities. The pipeline has been conveniently packaged such that minimal computational skills are required to download and install the dozens of software packages that VIPER uses. VIPER is a comprehensive solution that performs most standard RNA-seq analyses quickly and effectively with a built-in capacity for customization and expansion.

  5. Improving medical imaging report turnaround times: the role of technology.

    PubMed

    Marquez, Luis O; Stewart, Howard

    2005-01-01

    At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. The improved workflow was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in a real-time fashion and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition to maximize the resources to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional electronic-order-entry to paper request manner. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials for a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned, either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department.

  6. MONA – Interactive manipulation of molecule collections

    PubMed Central

    2013-01-01

    Working with small-molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico-chemical properties. Pipelining tools allow such workflows to be created and changed without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best suited workflow is not known in advance, thus making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition-driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small-molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively, with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be instantly used without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case-to-case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni-hamburg.de/mona. PMID:23985157

  7. Tools for monitoring system suitability in LC MS/MS centric proteomic experiments.

    PubMed

    Bereman, Michael S

    2015-03-01

    With advances in liquid chromatography coupled to tandem mass spectrometry technologies combined with the continued goals of biomarker discovery, clinical applications of established biomarkers, and integrating large multiomic datasets (i.e. "big data"), there remains an urgent need for robust tools to assess instrument performance (i.e. system suitability) in proteomic workflows. To this end, several freely available tools have been introduced that monitor a number of peptide identification (ID) and/or peptide ID free metrics. Peptide ID metrics include numbers of proteins, peptides, or peptide spectral matches identified from a complex mixture. Peptide ID free metrics include retention time reproducibility, full width half maximum, ion injection times, and integrated peptide intensities. The main driving force in the development of these tools is to monitor both intra- and interexperiment performance variability and to identify sources of variation. The purpose of this review is to summarize and evaluate these tools based on versatility, automation, vendor neutrality, metrics monitored, and visualization capabilities. In addition, the implementation of a robust system suitability workflow is discussed in terms of metrics, type of standard, and frequency of evaluation along with the obstacles to overcome prior to incorporating a more proactive approach to overall quality control in liquid chromatography coupled to tandem mass spectrometry based proteomic workflows. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
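
    As an illustration of one of the peptide ID free metrics mentioned, retention time reproducibility, the sketch below computes the per-peptide coefficient of variation of retention times across repeated QC runs from a simple table. The column names and the flagging threshold are assumptions made for the example, not part of any of the reviewed tools.

      # Sketch: retention-time reproducibility across QC runs.
      # Assumes a CSV with columns: peptide, run, retention_time (minutes).
      # Column names and the 5% CV threshold are illustrative choices.
      import pandas as pd

      def rt_reproducibility(csv_path, cv_threshold=0.05):
          data = pd.read_csv(csv_path)
          stats = data.groupby("peptide")["retention_time"].agg(["mean", "std"])
          stats["cv"] = stats["std"] / stats["mean"]
          flagged = stats[stats["cv"] > cv_threshold]
          return stats, flagged

      if __name__ == "__main__":
          stats, flagged = rt_reproducibility("qc_runs.csv")
          print(stats.round(3))
          print(f"{len(flagged)} peptides exceed the CV threshold")

    Tracking such a metric over time, rather than inspecting a single run, is what turns it into a system suitability check.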

  8. Flexible End2End Workflow Automation of Hit-Discovery Research.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin

    2014-08-01

    The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. The combination of all relevant subprocesses, whether automated or manually performed, independently of the organizational unit in which they take place, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance as well as preparatory activities and analyses of experiments. Furthermore, connecting control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with the modern methods of business process management (BPM). This approach is based on the new standard for process modeling, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together, with manifold modern methods, technologies, and a wide range of automated instruments, for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.

  9. A rational workflow for sequential virtual screening of chemical libraries on searching for new tyrosinase inhibitors.

    PubMed

    Le-Thi-Thu, Huong; Casanola-Martín, Gerardo M; Marrero-Ponce, Yovani; Rescigno, Antonio; Abad, Concepcion; Khan, Mahmud Tareq Hassan

    2014-01-01

    Tyrosinase is a bifunctional, copper-containing enzyme widely distributed across the phylogenetic tree. This enzyme is involved in the production of melanin and some other pigments in humans, animals and plants, including skin pigmentation in mammals and the browning process in plants and vegetables. Tyrosinase inhibitors have therefore attracted the attention of the scientific community, owing to their broad applications in the food, cosmetic, agricultural and medicinal fields for avoiding the undesirable effects of abnormal melanin overproduction. However, the search for novel chemicals with antityrosinase activity demands more efficient tools to speed up the tyrosinase inhibitor discovery process. This chapter focuses on the different components of a predictive modeling workflow for the identification and prioritization of potential new compounds with activity against the tyrosinase enzyme. Here, two chemical structure libraries, the Spectrum Collection and DrugBank, are used to combine different virtual screening and data mining techniques in a sequential manner, helping to avoid the usually expensive and time-consuming traditional methods. The sequential steps summarized here comprise the use of drug-likeness filters, similarity searching, classification and potency QSAR multiclassifier systems, molecular interaction modeling, and similarity/diversity analysis. Finally, the methodologies shown here provide a rational workflow for virtual screening hit analysis and selection, a promising drug discovery strategy for use in the target identification phase.
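
    A minimal sketch of the first two sequential steps described, a drug-likeness filter followed by similarity searching against a known inhibitor, written with RDKit. The reference SMILES, file names and cut-offs are illustrative assumptions, and the later QSAR, interaction-modeling and diversity stages are omitted.

      # Sketch: sequential virtual screening, stage 1 (Lipinski-style filter)
      # and stage 2 (Tanimoto similarity to a reference compound).
      # SMILES strings, file names and cut-offs are illustrative only.
      from rdkit import Chem, DataStructs
      from rdkit.Chem import Descriptors, Lipinski, AllChem

      REFERENCE = Chem.MolFromSmiles("Oc1ccc(O)cc1")  # placeholder reference compound

      def passes_drug_likeness(mol):
          return (Descriptors.MolWt(mol) <= 500
                  and Descriptors.MolLogP(mol) <= 5
                  and Lipinski.NumHDonors(mol) <= 5
                  and Lipinski.NumHAcceptors(mol) <= 10)

      def screen(smiles_file, similarity_cutoff=0.4):
          ref_fp = AllChem.GetMorganFingerprintAsBitVect(REFERENCE, 2, nBits=2048)
          hits = []
          with open(smiles_file) as fh:
              for line in fh:
                  mol = Chem.MolFromSmiles(line.strip())
                  if mol is None or not passes_drug_likeness(mol):
                      continue
                  fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
                  if DataStructs.TanimotoSimilarity(ref_fp, fp) >= similarity_cutoff:
                      hits.append(Chem.MolToSmiles(mol))
          return hits

      if __name__ == "__main__":
          print(screen("library.smi"))

    Each stage discards compounds cheaply before the more expensive modeling steps, which is the point of running the filters sequentially.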

  10. Wireless Mobile Technology to Improve Workflow and Feasibility of MR-Guided Percutaneous Interventions

    PubMed Central

    Rube, Martin A.; Holbrook, Andrew B.; Cox, Benjamin F.; Buciuc, Razvan; Melzer, Andreas

    2015-01-01

    Purpose A wireless interactive display and control device combined with a platform-independent web-based User Interface (UI) was developed to improve the workflow for interventional Magnetic Resonance Imaging (iMRI). Methods The iMRI-UI enables image acquisition of up to three independent slices using various pulse sequences with different contrast weighting. Pulse sequence, scan geometry and related parameters can be changed on the fly via the iMRI-UI using a tablet computer for improved lesion detection and interventional device targeting. The iMRI-UI was validated for core biopsies with a liver phantom (n=40) and Thiel soft-embalmed human cadavers (n=24) in a clinical 1.5T MRI scanner. Results The iMRI-UI components and setup were tested and found conditionally MRI-safe to use according to current ASTM standards. Despite minor temporary touchscreen interference at a close distance to the bore (<20 cm), no other issues regarding quality or imaging artefacts were observed. The 3D root-mean-square distance error was 2.8±1.0 (phantom) / 2.9±0.8 mm (cadaver) and overall procedure times ranged between 12–22 (phantom) / 20–55 minutes (cadaver). Conclusions The wireless iMRI-UI control setup enabled fast and accurate interventional biopsy needle placements along complex trajectories and improved the workflow for percutaneous interventions under MRI guidance in a preclinical trial. PMID:25179151

  11. A Model of Workflow Composition for Emergency Management

    NASA Astrophysics Data System (ADS)

    Xin, Chen; Bin-ge, Cui; Feng, Zhang; Xue-hui, Xu; Shan-shan, Fu

    Commonly used workflow technology is not flexible enough to deal with concurrent emergency situations. The paper proposes a novel model for defining emergency plans, in which workflow segments appear as a constituent part. A formal abstraction, which contains four operations, is defined to compose workflow segments under constraint rules. A software system for constructing and composing business process resources has been implemented and integrated into the Emergency Plan Management Application System.

  12. A Framework for Modeling Workflow Execution by an Interdisciplinary Healthcare Team.

    PubMed

    Kezadri-Hamiaz, Mounira; Rosu, Daniela; Wilk, Szymon; Kuziemsky, Craig; Michalowski, Wojtek; Carrier, Marc

    2015-01-01

    The use of business workflow models in healthcare is limited because of insufficient capture of complexities associated with behavior of interdisciplinary healthcare teams that execute healthcare workflows. In this paper we present a novel framework that builds on the well-founded business workflow model formalism and related infrastructures and introduces a formal semantic layer that describes selected aspects of team dynamics and supports their real-time operationalization.

  13. Design and implementation of a secure workflow system based on PKI/PMI

    NASA Astrophysics Data System (ADS)

    Yan, Kai; Jiang, Chao-hui

    2013-03-01

    Traditional workflow systems have several weaknesses in privilege management: low privilege-management efficiency, an overburdened administrator, and the lack of a trusted authority. A secure workflow model based on PKI/PMI is therefore proposed after an in-depth study of the security requirements of workflow systems. This model achieves static and dynamic authorization in the workflow system by verifying the user's identity through a public key certificate (PKC) and validating the user's privilege information through an attribute certificate (AC). Practice shows that this system can meet the security requirements of WfMS. Moreover, it not only improves system security but also ensures the integrity, confidentiality, availability and non-repudiation of the data in the system.

  14. [Applications of the hospital statistics management system].

    PubMed

    Zhai, Hong; Ren, Yong; Liu, Jing; Li, You-Zhang; Ma, Xiao-Long; Jiao, Tao-Tao

    2008-01-01

    The Hospital Statistics Management System is built on an office automation platform of the Shandong provincial hospital system. Its workflow, role, and permission-management technologies are used to standardize and optimize the statistics management program within the total quality control of hospital statistics. The system's applications combine the office automation platform with hospital statistics management, providing a practical example of a modern hospital statistics management model.

  15. The CASA Software Package

    NASA Astrophysics Data System (ADS)

    Petry, Dirk

    2018-03-01

    CASA is the standard science data analysis package for ALMA and VLA but it can also be used for the analysis of data from other observatories. In this talk, I will give an overview of the structure and features of CASA, who develops it, and the present status and plans, and then show typical analysis workflows for ALMA data with special emphasis on the handling of single dish data and its combination with interferometric data.
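
    For readers unfamiliar with CASA, its analysis steps are driven from Python as tasks. The snippet below sketches a bare-bones imaging call in the modular CASA 6 style; the measurement set name, image name and imaging parameters are placeholders chosen purely for illustration, and a real ALMA workflow would include calibration, flagging and single-dish combination steps.

      # Sketch: a minimal CASA 6 imaging call. The measurement set, image name
      # and imaging parameters are placeholders; real workflows involve many
      # additional calibration and combination steps.
      from casatasks import tclean

      tclean(vis="target.ms",            # calibrated measurement set (placeholder)
             imagename="target_cont",
             specmode="mfs",             # continuum imaging
             imsize=[512, 512],
             cell="0.1arcsec",
             weighting="natural",
             niter=1000)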

  16. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
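
    As a toy illustration of the task-parallel (MPI) execution style mentioned, the sketch below distributes a list of input files across MPI ranks with mpi4py; the per-file analysis function is a placeholder for whatever statistical routine is actually applied, and the file pattern is an assumption for the example.

      # Sketch: embarrassingly task-parallel processing of files across MPI ranks.
      # Run with e.g.: mpirun -n 8 python process_files.py
      # analyse() stands in for an actual climate analysis routine.
      import glob
      import os
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      def analyse(path):
          # Placeholder for the real per-file analysis.
          return path, os.path.getsize(path)

      files = sorted(glob.glob("output/*.nc"))
      local_results = [analyse(f) for f in files[rank::size]]  # round-robin assignment

      gathered = comm.gather(local_results, root=0)
      if rank == 0:
          for chunk in gathered:
              for path, nbytes in chunk:
                  print(path, nbytes)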

  17. PhenoTips: patient phenotyping software for clinical and research use.

    PubMed

    Girdea, Marta; Dumitriu, Sergiu; Fiume, Marc; Bowdin, Sarah; Boycott, Kym M; Chénier, Sébastien; Chitayat, David; Faghfoury, Hanna; Meyn, M Stephen; Ray, Peter N; So, Joyce; Stavropoulos, Dimitri J; Brudno, Michael

    2013-08-01

    We have developed PhenoTips: open source software for collecting and analyzing phenotypic information for patients with genetic disorders. Our software combines an easy-to-use interface, compatible with any device that runs a Web browser, with a standardized database back end. The PhenoTips' user interface closely mirrors clinician workflows so as to facilitate the recording of observations made during the patient encounter. Collected data include demographics, medical history, family history, physical and laboratory measurements, physical findings, and additional notes. Phenotypic information is represented using the Human Phenotype Ontology; however, the complexity of the ontology is hidden behind a user interface, which combines simple selection of common phenotypes with error-tolerant, predictive search of the entire ontology. PhenoTips supports accurate diagnosis by analyzing the entered data, then suggesting additional clinical investigations and providing Online Mendelian Inheritance in Man (OMIM) links to likely disorders. By collecting, classifying, and analyzing phenotypic information during the patient encounter, PhenoTips allows for streamlining of clinic workflow, efficient data entry, improved diagnosis, standardization of collected patient phenotypes, and sharing of anonymized patient phenotype data for the study of rare disorders. Our source code and a demo version of PhenoTips are available at http://phenotips.org. © 2013 WILEY PERIODICALS, INC.

  18. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury

    PubMed Central

    Haefeli, Jenny; Ferguson, Adam R.; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I.; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A.; Massa, Stephen M.

    2017-01-01

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans. PMID:28205533
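
    The analysis pipeline described, non-linear principal component analysis followed by a linear mixed model, can be approximated in outline with standard Python libraries. The sketch below uses scikit-learn's KernelPCA and statsmodels' mixed linear model as stand-ins, with column names invented for illustration; it is not the authors' exact implementation.

      # Sketch: non-linear PCA on multivariate outcomes, then a linear mixed model
      # testing treatment effects on the first component. Column names ('cohort',
      # 'treatment', outcome_* columns) are illustrative; this is not the study code.
      import pandas as pd
      from sklearn.decomposition import KernelPCA
      import statsmodels.formula.api as smf

      data = pd.read_csv("outcomes.csv")           # one row per animal
      outcome_cols = [c for c in data.columns if c.startswith("outcome_")]

      kpca = KernelPCA(n_components=2, kernel="rbf")
      scores = kpca.fit_transform(data[outcome_cols])
      data["pc1"] = scores[:, 0]

      # Mixed model: fixed effect of treatment, random intercept per cohort.
      model = smf.mixedlm("pc1 ~ treatment", data, groups=data["cohort"])
      result = model.fit()
      print(result.summary())

    Reducing the correlated outcome measures to a component score before the mixed model is what allows treatment interactions to be tested across the pooled studies.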

  19. A data-driven approach for evaluating multi-modal therapy in traumatic brain injury.

    PubMed

    Haefeli, Jenny; Ferguson, Adam R; Bingham, Deborah; Orr, Adrienne; Won, Seok Joon; Lam, Tina I; Shi, Jian; Hawley, Sarah; Liu, Jialing; Swanson, Raymond A; Massa, Stephen M

    2017-02-16

    Combination therapies targeting multiple recovery mechanisms have the potential for additive or synergistic effects, but experimental design and analyses of multimodal therapeutic trials are challenging. To address this problem, we developed a data-driven approach to integrate and analyze raw source data from separate pre-clinical studies and evaluated interactions between four treatments following traumatic brain injury. Histologic and behavioral outcomes were measured in 202 rats treated with combinations of an anti-inflammatory agent (minocycline), a neurotrophic agent (LM11A-31), and physical therapy consisting of assisted exercise with or without botulinum toxin-induced limb constraint. Data was curated and analyzed in a linked workflow involving non-linear principal component analysis followed by hypothesis testing with a linear mixed model. Results revealed significant benefits of the neurotrophic agent LM11A-31 on learning and memory outcomes after traumatic brain injury. In addition, modulations of LM11A-31 effects by co-administration of minocycline and by the type of physical therapy applied reached statistical significance. These results suggest a combinatorial effect of drug and physical therapy interventions that was not evident by univariate analysis. The study designs and analytic techniques applied here form a structured, unbiased, internally validated workflow that may be applied to other combinatorial studies, both in animals and humans.

  20. Validation of the Applied Biosystems RapidFinder Shiga Toxin-Producing E. coli (STEC) Detection Workflow.

    PubMed

    Cloke, Jonathan; Matheny, Sharon; Swimley, Michelle; Tebbs, Robert; Burrell, Angelia; Flannery, Jonathan; Bastin, Benjamin; Bird, Patrick; Benzinger, M Joseph; Crowley, Erin; Agin, James; Goins, David; Salfinger, Yvonne; Brodsky, Michael; Fernandez, Maria Cristina

    2016-11-01

    The Applied Biosystems™ RapidFinder™ STEC Detection Workflow (Thermo Fisher Scientific) is a complete protocol for the rapid qualitative detection of Escherichia coli (E. coli) O157:H7 and the "Big 6" non-O157 Shiga-like toxin-producing E. coli (STEC) serotypes (defined as serogroups: O26, O45, O103, O111, O121, and O145). The RapidFinder STEC Detection Workflow makes use of either the automated preparation of PCR-ready DNA using the Applied Biosystems PrepSEQ™ Nucleic Acid Extraction Kit in conjunction with the Applied Biosystems MagMAX™ Express 96-well magnetic particle processor or the Applied Biosystems PrepSEQ Rapid Spin kit for manual preparation of PCR-ready DNA. Two separate assays comprise the RapidFinder STEC Detection Workflow, the Applied Biosystems RapidFinder STEC Screening Assay and the Applied Biosystems RapidFinder STEC Confirmation Assay. The RapidFinder STEC Screening Assay includes primers and probes to detect the presence of stx1 (Shiga toxin 1), stx2 (Shiga toxin 2), eae (intimin), and E. coli O157 gene targets. The RapidFinder STEC Confirmation Assay includes primers and probes for the "Big 6" non-O157 STEC and E. coli O157:H7. The use of these two assays in tandem allows a user to detect accurately the presence of the "Big 6" STECs and E. coli O157:H7. The performance of the RapidFinder STEC Detection Workflow was evaluated in a method comparison study, in inclusivity and exclusivity studies, and in a robustness evaluation. The assays were compared to the U.S. Department of Agriculture (USDA), Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook (MLG) 5.09: Detection, Isolation and Identification of Escherichia coli O157:H7 from Meat Products and Carcass and Environmental Sponges for raw ground beef (73% lean) and USDA/FSIS-MLG 5B.05: Detection, Isolation and Identification of Escherichia coli non-O157:H7 from Meat Products and Carcass and Environmental Sponges for raw beef trim. No statistically significant differences were observed between the reference method and the individual or combined kits forming the candidate assay using either of the DNA preparation kits (manual or automated extraction). For the inclusivity and exclusivity evaluation, the RapidFinder STEC Detection Workflow, comprising both RapidFinder STEC screening and confirmation kits, correctly identified all 50 target organism isolates and correctly excluded all 30 nontarget strains for both of the assays evaluated. The results of these studies demonstrate the sensitivity and selectivity of the RapidFinder STEC Detection Workflow for the detection of E. coli O157:H7 and the "Big 6" STEC serotypes in both raw ground beef and beef trim. The robustness testing demonstrated that minor variations in the method parameters did not impact the accuracy of the assay and highlighted the importance of following the correct incubation temperatures.

  1. Worklist handling in workflow-enabled radiological application systems

    NASA Astrophysics Data System (ADS)

    Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens

    2000-05-01

    For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which provide workflow participants automatically with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part for the end-users of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely determine work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists, if present at all, are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide us with an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.

  2. Nanocuration workflows: Establishing best practices for identifying, inputting, and sharing data to inform decisions on nanomaterials

    PubMed Central

    Powers, Christina M; Mills, Karmann A; Morris, Stephanie A; Klaessig, Fred; Gaheen, Sharon; Lewinski, Nastassja

    2015-01-01

    Summary There is a critical opportunity in the field of nanoscience to compare and integrate information across diverse fields of study through informatics (i.e., nanoinformatics). This paper is one in a series of articles on the data curation process in nanoinformatics (nanocuration). Other articles in this series discuss key aspects of nanocuration (temporal metadata, data completeness, database integration), while the focus of this article is on the nanocuration workflow, or the process of identifying, inputting, and reviewing nanomaterial data in a data repository. In particular, the article discusses: 1) the rationale and importance of a defined workflow in nanocuration, 2) the influence of organizational goals or purpose on the workflow, 3) established workflow practices in other fields, 4) current workflow practices in nanocuration, 5) key challenges for workflows in emerging fields like nanomaterials, 6) examples to make these challenges more tangible, and 7) recommendations to address the identified challenges. Throughout the article, there is an emphasis on illustrating key concepts and current practices in the field. Data on current practices in the field are from a group of stakeholders active in nanocuration. In general, the development of workflows for nanocuration is nascent, with few individuals formally trained in data curation or utilizing available nanocuration resources (e.g., ISA-TAB-Nano). Additional emphasis on the potential benefits of cultivating nanomaterial data via nanocuration processes (e.g., capability to analyze data from across research groups) and providing nanocuration resources (e.g., training) will likely prove crucial for the wider application of nanocuration workflows in the scientific community. PMID:26425437

  3. Implementing bioinformatic workflows within the bioextract server

    USDA-ARS?s Scientific Manuscript database

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  4. Coupling of a continuum ice sheet model and a discrete element calving model using a scientific workflow system

    NASA Astrophysics Data System (ADS)

    Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut

    2017-04-01

    Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involve the execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the model executions, the workflow also contains data format conversion tasks that link the ice flow and calving steps through sequential, nested and iterative stages. Thus, the management and monitoring of all processing tasks, including the data management and transfer within the workflow, become more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. As more scripts or modifications were introduced to meet user requirements, debugging and validating the results became increasingly cumbersome. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be handled in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of high-level scientific workflow middleware makes results easier to reproduce and provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
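
    Stripped of the UNICORE middleware, the iterative coupling described above can be pictured as the loop sketched below: run the continuum ice flow step, convert its output for the calving model, run the calving step, and feed the updated geometry back. The script names, file names and conversion routine are placeholders, since the real workflow delegates these steps to HPC jobs submitted and monitored by the workflow engine.

      # Sketch of the coupling loop only; the solver invocations and the format
      # conversion are placeholders for the Elmer/Ice and HiDEM HPC jobs that
      # the UNICORE workflow actually submits and monitors.
      import subprocess

      def convert_for_calving(ice_flow_output, calving_input):
          # Placeholder: reformat ice-flow results (e.g. geometry, velocities)
          # into the input expected by the discrete element calving model.
          with open(ice_flow_output) as src, open(calving_input, "w") as dst:
              dst.write(src.read())

      for cycle in range(5):                      # number of coupling iterations
          subprocess.run(["./run_ice_flow.sh", f"cycle_{cycle}"], check=True)   # continuum step
          convert_for_calving(f"cycle_{cycle}/flow.dat", f"cycle_{cycle}/calving.in")
          subprocess.run(["./run_calving.sh", f"cycle_{cycle}"], check=True)    # discrete element step
          # The updated front geometry from the calving step seeds the next cycle.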

  5. Isolation and characterization of circulating tumor cells using a novel workflow combining the CellSearch® system and the CellCelector™.

    PubMed

    Neumann, Martin Horst Dieter; Schneck, Helen; Decker, Yvonne; Schömer, Susanne; Franken, André; Endris, Volker; Pfarr, Nicole; Weichert, Wilko; Niederacher, Dieter; Fehm, Tanja; Neubauer, Hans

    2017-01-01

    Circulating tumor cells (CTC) are rare cells which have left the primary tumor to enter the blood stream. Although only a small CTC subgroup is capable of extravasating, the presence of CTCs is associated with an increased risk of metastasis and a shorter overall survival. Understanding the heterogeneous CTC biology will optimize treatment decisions and will thereby improve patient outcome. For this, robust workflows for detection and isolation of CTCs are urgently required. Here, we present a workflow to characterize CTCs by combining the advantages of both the CellSearch® and the CellCelector™ micromanipulation system. CTCs were isolated from CellSearch® cartridges using the CellCelector™ system and were deposited into PCR tubes for subsequent molecular analysis (whole genome amplification (WGA) and massive parallel multigene sequencing). By a CellCelector™ screen we reidentified 97% of CellSearch® SKBR-3 cells. Furthermore, we isolated 97% of CellSearch®-proven patient CTCs using the CellCelector™ system. Therein, we found an almost perfect correlation of R² = 0.98 (Spearman's rho correlation, n = 20, p < 0.00001) between the CellSearch® CTC count (n = 271) and the CellCelector™ detected CTCs (n = 252). Isolated CTCs were analyzed by WGA and massive parallel multigene sequencing. In total, single nucleotide polymorphisms (SNPs) could be detected in 50 genes in seven CTCs, 12 MCF-7, and 3 T47D cells, respectively. Taken together, CTC quantification via the CellCelector™ system ensures a comprehensive detection of CTCs preidentified by the CellSearch® system. Moreover, the isolation of CTCs after CellSearch® using the CellCelector™ system guarantees CTC enrichment without any contaminants, enabling subsequent high throughput genomic analyses on the single cell level. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:125-132, 2017. © 2016 American Institute of Chemical Engineers.

  6. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2015-06-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.

    PubMed

    Hartman, Douglas J

    2016-03-01

    Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Integrating prediction, provenance, and optimization into high energy workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schram, M.; Bansal, V.; Friese, R. D.

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
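
    As a rough illustration of the optimization-based scheduling component described above, the sketch below greedily assigns workflow tasks to the resource with the earliest predicted completion time; the task runtimes and resource speeds are invented placeholders, not values from the Belle II study, and the real framework drives its model with collected provenance data.

```python
# Hedged sketch: greedy task-to-resource assignment driven by a simple
# performance model (predicted runtime = work / speed). All numbers are
# hypothetical.
def schedule(tasks, resources):
    """tasks: {name: work units}; resources: {name: speed}. Returns (assignment, makespan)."""
    finish_time = {r: 0.0 for r in resources}          # when each resource frees up
    assignment = {}
    # Placing larger tasks first tends to balance load better in greedy schemes.
    for task, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = min(resources, key=lambda r: finish_time[r] + work / resources[r])
        finish_time[best] += work / resources[best]
        assignment[task] = best
    return assignment, max(finish_time.values())

tasks = {"simulate": 120, "reconstruct": 80, "skim": 30, "analyze": 20}   # hypothetical
resources = {"cluster_a": 2.0, "cluster_b": 1.0}                          # hypothetical speeds
plan, makespan = schedule(tasks, resources)
print(plan, f"predicted makespan = {makespan:.1f}")
```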

  9. Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.

    2013-12-01

    The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed keeping in mind three distinct applications: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. Therefore, it is of importance to investigate the possibility of automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely. SWfMS provides additional advantages. In particular, it is possible to group independent simulations in a single job to fit the available computational resources. They also give a basic level of fault resilience as the workflow can be resumed at the correct state preceding a failure. Some of the best candidates for our particular workflow are Kepler and Swift, and the latter appears to be the most serious candidate for a large-scale workflow on a single supercomputer, remaining sufficiently simple to accommodate further modifications and improvements.
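
    The abstract notes that independent simulations can be grouped into a single job to fit the available computational resources; the sketch below shows one simple way to bundle earthquake simulations into batches under a node budget. The event names, node counts, and job limit are assumptions for illustration, not parameters of the actual adjoint-tomography workflow.

```python
# Hedged sketch: pack independent forward/adjoint simulations into job
# bundles so that each bundle fits within a fixed node allocation.
def bundle_simulations(events, nodes_per_event, nodes_per_job):
    bundles, current, used = [], [], 0
    for event in events:
        if used + nodes_per_event > nodes_per_job and current:
            bundles.append(current)        # close the bundle once the allocation is full
            current, used = [], 0
        current.append(event)
        used += nodes_per_event
    if current:
        bundles.append(current)
    return bundles

events = [f"event_{i:03d}" for i in range(10)]      # hypothetical earthquake list
print(bundle_simulations(events, nodes_per_event=96, nodes_per_job=384))
# -> four events per job bundle with this (assumed) allocation
```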

  10. Comparison of manual and automated AmpliSeq™ workflows in the typing of a Somali population with the Precision ID Identity Panel.

    PubMed

    van der Heijden, Suzanne; de Oliveira, Susanne Juel; Kampmann, Marie-Louise; Børsting, Claus; Morling, Niels

    2017-11-01

    The Precision ID Identity Panel was used to type 109 Somali individuals in order to obtain allele frequencies for the Somali population. These frequencies were used to establish a Somali HID-SNP database, which will be used for the biostatistical calculations in family and immigration cases. Genotypes obtained with the Precision ID Identity Panel were found to be in almost complete concordance with genotypes obtained with the SNPforID PCR-SBE-CE assay. In seven SNP loci, silent alleles were identified, of which most were previously described in the literature. The project also set out to compare different AmpliSeq™ workflows to investigate the possibility of using automated library building in forensic genetic case work. In order to do so, the SNP typing of the Somalis was performed using three different workflows: 1) manual library building and sequencing on the Ion PGM™, 2) automated library building using the Biomek® 3000 and sequencing on the Ion PGM™, and 3) automated library building using the Ion Chef™ and sequencing on the Ion S5™. AmpliSeq™ workflows were compared based on coverage, locus balance, noise, and heterozygote balance. Overall, the Ion Chef™/Ion S5™ workflow was found to give the best results and required the least hands-on time in the laboratory. However, the Ion Chef™/Ion S5™ workflow was also the most expensive, and the number of libraries that may be constructed in one Ion Chef™ library building run was limited to eight, which is too few for high-throughput workflows. The Biomek® 3000/Ion PGM™ workflow was found to perform similarly to the manual/Ion PGM™ workflow. This argues for the use of automated library building in forensic genetic case work: automated library building decreases the workload of the laboratory staff, decreases the risk of pipetting errors, and simplifies the daily workflow in forensic genetic laboratories. Copyright © 2017 Elsevier B.V. All rights reserved.
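
    The workflow comparison above is based on coverage, locus balance, noise, and heterozygote balance; a minimal sketch of how two of these per-locus metrics could be computed from allele read counts is shown below. The counts and the exact metric definitions are assumptions for illustration and may differ from the study's own definitions.

```python
# Hedged sketch: per-locus heterozygote balance and locus balance from read
# counts. Counts are invented; definitions assumed here are
#   heterozygote balance = lower/higher allele count at a heterozygous SNP,
#   locus balance        = locus coverage relative to the mean locus coverage.
locus_reads = {                        # hypothetical {locus: (allele1 reads, allele2 reads)}
    "rs1490413": (812, 774),
    "rs876724":  (650, 0),             # homozygous call
    "rs1357617": (402, 518),
}

coverages = {loc: a + b for loc, (a, b) in locus_reads.items()}
mean_cov = sum(coverages.values()) / len(coverages)

for loc, (a, b) in locus_reads.items():
    het_balance = round(min(a, b) / max(a, b), 2) if min(a, b) > 0 else None
    locus_balance = coverages[loc] / mean_cov
    print(loc, f"coverage={coverages[loc]}",
          f"het_balance={het_balance}", f"locus_balance={locus_balance:.2f}")
```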

  11. A scientific workflow framework for (13)C metabolic flux analysis.

    PubMed

    Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina

    2016-08-20

    Metabolic flux analysis (MFA) with (13)C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of (13)C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on demand (13)C MFA workflows. (13)C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by (13)C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflows is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize the potential waste of time and resources. To verify the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.
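
    As a toy illustration of the two validation checks mentioned above (data-source consistency and parameter matching), the sketch below validates a chained retrieval workflow against a small hand-written "ontology" of expected input types; the tool names, parameters, and type labels are hypothetical, and the real system expresses this knowledge in an OWL ontology built with Protégé.

```python
# Hedged sketch: rule-based validation of a remote-sensing workflow chain.
# The "ontology" here is a plain dict standing in for the OWL domain model.
ONTOLOGY = {  # hypothetical tool signatures: expected input data types
    "aod_retrieval": {"inputs": {"radiance": "MODIS_L1B"}, "output": "AOD_map"},
    "ground_fusion": {"inputs": {"aod": "AOD_map", "stations": "AERONET"}, "output": "fused_AOD"},
}

def validate(workflow):
    """workflow: list of (tool, {parameter: supplied data type}). Returns a list of errors."""
    errors = []
    for tool, bindings in workflow:
        expected = ONTOLOGY.get(tool)
        if expected is None:
            errors.append(f"unknown tool: {tool}")
            continue
        for param, dtype in bindings.items():
            want = expected["inputs"].get(param)
            if want is None:
                errors.append(f"{tool}: unexpected parameter '{param}'")          # parameter-matching error
            elif want != dtype:
                errors.append(f"{tool}.{param}: got {dtype}, expects {want}")     # data-source consistency error
    return errors

chain = [("aod_retrieval", {"radiance": "MODIS_L1B"}),
         ("ground_fusion", {"aod": "AOD_map", "stations": "surface_PM25"})]       # hypothetical mismatch
print(validate(chain))
```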

  13. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.

  14. Performance of an Automated Versus a Manual Whole-Body Magnetic Resonance Imaging Workflow.

    PubMed

    Stocker, Daniel; Finkenstaedt, Tim; Kuehn, Bernd; Nanz, Daniel; Klarhoefer, Markus; Guggenberger, Roman; Andreisek, Gustav; Kiefer, Berthold; Reiner, Caecilia S

    2018-04-24

    The aim of this study was to evaluate the performance of an automated workflow for whole-body magnetic resonance imaging (WB-MRI), which reduces user interaction compared with the manual WB-MRI workflow. This prospective study was approved by the local ethics committee. Twenty patients underwent WB-MRI for myopathy evaluation on a 3 T MRI scanner. Ten patients (7 women; age, 52 ± 13 years; body weight, 69.9 ± 13.3 kg; height, 173 ± 9.3 cm; body mass index, 23.2 ± 3.0) were examined with a prototypical automated WB-MRI workflow, which automatically segments the whole body, and 10 patients (6 women; age, 35.9 ± 12.4 years; body weight, 72 ± 21 kg; height, 169.2 ± 10.4 cm; body mass index, 24.9 ± 5.6) with a manual scan. Overall image quality (IQ; 5-point scale: 5, excellent; 1, poor) and coverage of the study volume were assessed by 2 readers for each sequence (coronal T2-weighted turbo inversion recovery magnitude [TIRM] and axial contrast-enhanced T1-weighted [ce-T1w] gradient dual-echo sequence). Interreader agreement was evaluated with intraclass correlation coefficients. Examination time, number of user interactions, and MR technicians' acceptance rating (1, highest; 10, lowest) was compared between both groups. Total examination time was significantly shorter for automated WB-MRI workflow versus manual WB-MRI workflow (30.0 ± 4.2 vs 41.5 ± 3.4 minutes, P < 0.0001) with significantly shorter planning time (2.5 ± 0.8 vs 14.0 ± 7.0 minutes, P < 0.0001). Planning took 8% of the total examination time with automated versus 34% with manual WB-MRI workflow (P < 0.0001). The number of user interactions with automated WB-MRI workflow was significantly lower compared with manual WB-MRI workflow (10.2 ± 4.4 vs 48.2 ± 17.2, P < 0.0001). Planning efforts were rated significantly lower by the MR technicians for the automated WB-MRI workflow than for the manual WB-MRI workflow (2.20 ± 0.92 vs 4.80 ± 2.39, respectively; P = 0.005). Overall IQ was similar between automated and manual WB-MRI workflow (TIRM: 4.00 ± 0.94 vs 3.45 ± 1.19, P = 0.264; ce-T1w: 4.20 ± 0.88 vs 4.55 ± .55, P = 0.423). Interreader agreement for overall IQ was excellent for TIRM and ce-T1w with an intraclass correlation coefficient of 0.95 (95% confidence interval, 0.86-0.98) and 0.88 (95% confidence interval, 0.70-0.95). Incomplete coverage of the thoracic compartment in the ce-T1w sequence occurred more often in the automated WB-MRI workflow (P = 0.008) for reader 2. No other significant differences in the study volume coverage were found. In conclusion, the automated WB-MRI scanner workflow showed a significant reduction of the examination time and the user interaction compared with the manual WB-MRI workflow. Image quality and the coverage of the study volume were comparable in both groups.
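
    The examination-time comparison reported above (30.0 ± 4.2 vs 41.5 ± 3.4 minutes, 10 patients per group) can be checked approximately from the summary statistics alone; the sketch below runs a two-sample t-test from means and standard deviations with SciPy. Whether the original analysis used a pooled or Welch test is not stated in the abstract, so that choice is an assumption.

```python
# Hedged sketch: two-sample t-test recomputed from the reported summary
# statistics. Group sizes of 10 come from the abstract; the use of Welch's
# (unequal-variance) test is an assumption.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=30.0, std1=4.2, nobs1=10,
                            mean2=41.5, std2=3.4, nobs2=10,
                            equal_var=False)
print(f"t = {t:.2f}, p = {p:.6f}")   # consistent with the reported p < 0.0001
```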

  15. Climate Science Performance, Data and Productivity on Titan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Benjamin W; Worley, Patrick H; Gaddis, Abigail L

    2015-01-01

    Climate Science models are flagship codes for the largest of high performance computing (HPC) resources, both in visibility, with the newly launched Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) effort, and in terms of significant fractions of system usage. The performance of the DOE ACME model is captured with application-level timers and examined through a sizeable run archive. Performance and variability of compute, queue time and ancillary services are examined. As Climate Science advances in the use of HPC resources, there has been an increase in the human and data systems required to achieve program goals. A description of current workflow processes (hardware, software, human) and planned automation of the workflow, along with historical and projected data-in-motion and data-at-rest usage, is detailed. The combination of these two topics motivates a description of future systems requirements for DOE Climate Modeling efforts, focusing on the growth of data storage and the network and disk bandwidth required to handle data at an acceptable rate.

  16. Performances of the PIPER scalable child human body model in accident reconstruction

    PubMed Central

    Giordano, Chiara; Kleiven, Svein

    2017-01-01

    Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to perform child accident reconstructions using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM of different ages and in different positions obtained by the PIPER tool. Overall, the PIPER scalable child HBM managed reasonably well to predict the injury severity and location of the children involved in real-life crash scenarios documented in the medical records. The developed methodology and workflow is essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to have implications for improved safety designs for a better protection of children in traffic accidents. PMID:29135997

  17. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  18. MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.

    PubMed

    Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y

    2018-01-02

    Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and result interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer.

  19. Matches, Mismatches, and Methods: Multiple-View Workflows for Energy Portfolio Analysis.

    PubMed

    Brehmer, Matthew; Ng, Jocelyn; Tate, Kevin; Munzner, Tamara

    2016-01-01

    The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.

  20. Leveraging Existing Heritage Documentation for Animations: Senate Virtual Tour

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Fai, S.; Graham, K.; Walczak, G.

    2017-08-01

    The use of digital documentation techniques has led to an increase in opportunities for using documentation data for valorization purposes, in addition to technical purposes. Likewise, building information models (BIMs) made from these data sets hold valuable information that can be as effective for public education as it is for rehabilitation. A BIM can reveal the elements of a building, as well as the different stages of a building over time. Valorizing this information increases the possibility for public engagement and interest in a heritage place. Digital data sets were leveraged by the Carleton Immersive Media Studio (CIMS) for parts of a virtual tour of the Senate of Canada. For the tour, workflows involving four different programs were explored to determine an efficient and effective way to leverage the existing documentation data to create informative and visually enticing animations for public dissemination: Autodesk Revit, Enscape, Autodesk 3ds Max, and Bentley Pointools. The explored workflows involve animations of point clouds, BIMs, and a combination of the two.

  1. TruSeq Stranded mRNA and Total RNA Sample Preparation Kits

    Cancer.gov

    Total RNA-Seq enabled by ribosomal RNA (rRNA) reduction is compatible with formalin-fixed paraffin embedded (FFPE) samples, which contain potentially critical biological information. The family of TruSeq Stranded Total RNA sample preparation kits provides a unique combination of unmatched data quality for both mRNA and whole-transcriptome analyses, robust interrogation of both standard and low-quality samples and workflows compatible with a wide range of study designs.

  2. Integrated workflows for spiking neuronal network simulations

    PubMed Central

    Antolík, Ján; Davison, Andrew P.

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902
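
    Mozaik's declarative, hierarchically organized configuration files are a central design choice described above; the sketch below shows the general idea of layering a run-specific configuration over shared defaults in plain Python. The parameter names and layout are hypothetical and do not reproduce Mozaik's actual configuration schema.

```python
# Hedged sketch: hierarchical configuration merging, in the spirit of
# declaratively specified model/experiment parameters. All keys are invented.
def merge(base, override):
    """Recursively overlay 'override' onto 'base' without mutating either dict."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

defaults = {
    "model": {"sheets": {"V1_exc": {"density": 1000}, "V1_inh": {"density": 250}}},
    "experiment": {"stimulus": "drifting_grating", "trials": 10},
}
run_specific = {"experiment": {"trials": 2},                      # quick test run
                "model": {"sheets": {"V1_exc": {"density": 100}}}}

print(merge(defaults, run_specific))
```

    Keeping the run-specific file small and letting it override only what differs is what makes such hierarchical configurations convenient for repeated virtual experiments.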

  3. Integrated workflows for spiking neuronal network simulations.

    PubMed

    Antolík, Ján; Davison, Andrew P

    2013-01-01

    The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.

  4. Text mining meets workflow: linking U-Compare with Taverna

    PubMed Central

    Kano, Yoshinobu; Dobson, Paul; Nakanishi, Mio; Tsujii, Jun'ichi; Ananiadou, Sophia

    2010-01-01

    Summary: Text mining from the biomedical literature is of increasing importance, yet it is not easy for the bioinformatics community to create and run text mining workflows due to the lack of accessibility and interoperability of the text mining resources. The U-Compare system provides a wide range of bio text mining resources in a highly interoperable workflow environment where workflows can very easily be created, executed, evaluated and visualized without coding. We have linked U-Compare to Taverna, a generic workflow system, to expose text mining functionality to the bioinformatics community. Availability: http://u-compare.org/taverna.html, http://u-compare.org Contact: kano@is.s.u-tokyo.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20709690

  5. wft4galaxy: a workflow testing tool for galaxy.

    PubMed

    Piras, Marco Enrico; Pireddu, Luca; Zanetti, Gianluigi

    2017-12-01

    Workflow managers for scientific analysis provide a high-level programming platform facilitating standardization, automation, collaboration and access to sophisticated computing resources. The Galaxy workflow manager provides a prime example of this type of platform. As compositions of simpler tools, workflows effectively comprise specialized computer programs implementing often very complex analysis procedures. To date, no simple way to automatically test Galaxy workflows and ensure their correctness has appeared in the literature. With wft4galaxy we offer a tool to bring automated testing to Galaxy workflows, making it feasible to bring continuous integration to their development and ensuring that defects are detected promptly. wft4galaxy can be easily installed as a regular Python program or launched directly as a Docker container-the latter reducing installation effort to a minimum. Available at https://github.com/phnmnl/wft4galaxy under the Academic Free License v3.0. marcoenrico.piras@crs4.it. © The Author 2017. Published by Oxford University Press.

  6. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  7. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.

  8. A practical workflow for making anatomical atlases for biological research.

    PubMed

    Wan, Yong; Lewis, A Kelsey; Colasanto, Mary; van Langeveld, Mark; Kardon, Gabrielle; Hansen, Charles

    2012-01-01

    The anatomical atlas has been at the intersection of science and art for centuries. These atlases are essential to biological research, but high-quality atlases are often scarce. Recent advances in imaging technology have made high-quality 3D atlases possible. However, until now there has been a lack of practical workflows using standard tools to generate atlases from images of biological samples. With certain adaptations, CG artists' workflow and tools, traditionally used in the film industry, are practical for building high-quality biological atlases. Researchers have developed a workflow for generating a 3D anatomical atlas using accessible artists' tools. They used this workflow to build a mouse limb atlas for studying the musculoskeletal system's development. This research aims to raise the awareness of using artists' tools in scientific research and promote interdisciplinary collaborations between artists and scientists. This video (http://youtu.be/g61C-nia9ms) demonstrates a workflow for creating an anatomical atlas.

  9. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  10. Separating Business Logic from Medical Knowledge in Digital Clinical Workflows Using Business Process Model and Notation and Arden Syntax.

    PubMed

    de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea

    2018-01-01

    Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during a clinical diagnostic process, thus improving the decision-making process. Implementation of a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B, where institution-specific medical decision-making processes could be adapted without modifying the workflow business logic. Separation of business logic and medical decision-making results in more easily reusable electronic clinical workflows.
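
    To make the separation of business logic from medical decision-making concrete, the sketch below keeps the workflow steps in plain orchestration code and delegates every clinical decision to a pluggable rule function; this is a generic Python analogy under assumed names, not Activiti BPMN or Arden Syntax, which are the technologies actually used in the paper.

```python
# Hedged sketch: workflow orchestration (business logic) calling out to an
# exchangeable medical rule (decision logic). The rule and the data fields
# are hypothetical, not the institutional hepatitis B guideline.
def hbv_prophylaxis_rule(mother):
    """Stand-in for a medical-knowledge module (e.g., an Arden Syntax MLM)."""
    return mother.get("HBsAg_positive", False)

def mother_to_child_workflow(mother, rule=hbv_prophylaxis_rule):
    steps = ["register_admission", "order_HBsAg_test"]         # business logic only
    if rule(mother):                                            # decision delegated to the rule
        steps += ["give_HBIG_to_newborn", "start_HBV_vaccination"]
    steps.append("document_and_discharge")
    return steps

print(mother_to_child_workflow({"HBsAg_positive": True}))
```

    Because the rule is passed in as a parameter, an institution can swap in its own decision logic without touching the workflow code, mirroring the separation of concerns the abstract describes.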

  11. CamBAfx: Workflow Design, Implementation and Application for Neuroimaging

    PubMed Central

    Ooi, Cinly; Bullmore, Edward T.; Wink, Alle-Meije; Sendur, Levent; Barnes, Anna; Achard, Sophie; Aspden, John; Abbott, Sanja; Yue, Shigang; Kitzbichler, Manfred; Meunier, David; Maxim, Voichita; Salvador, Raymond; Henty, Julian; Tait, Roger; Subramaniam, Naresh; Suckling, John

    2009-01-01

    CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations like programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating exchange of innovative computational tools between originating labs. PMID:19826470

  12. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  13. Canine neuroanatomy: Development of a 3D reconstruction and interactive application for undergraduate veterinary education

    PubMed Central

    Raffan, Hazel; Guevar, Julien; Poyade, Matthieu; Rea, Paul M.

    2017-01-01

    Current methods used to communicate and present the complex arrangement of vasculature related to the brain and spinal cord are limited in undergraduate veterinary neuroanatomy training. Traditionally, this is taught with 2-dimensional (2D) diagrams, photographs and medical imaging scans which show a fixed viewpoint. 2D representations of 3-dimensional (3D) objects, however, lead to loss of spatial information, which can present problems when translating this to the patient. Computer-assisted learning packages with interactive 3D anatomical models have become established in medical training, yet equivalent resources are scarce in veterinary education. For this reason, we set out to develop a workflow methodology for creating an interactive model depicting the vasculature of the canine brain that could be used in undergraduate education. Using MR images of a dog and several commonly available software programs, we set out to show how combining image editing, segmentation and surface generation, 3D modeling and texturing can result in the creation of a fully interactive application for veterinary training. In addition to clearly identifying a workflow methodology for the creation of this dataset, we have also demonstrated how an interactive tutorial and self-assessment tool can be incorporated into this. In conclusion, we present a workflow which has been successful in developing a 3D reconstruction of the canine brain and associated vasculature through segmentation, surface generation and post-processing of readily available medical imaging data. The reconstructed model was implemented into an interactive application for veterinary education that has been designed to target the problems associated with learning neuroanatomy, primarily the inability to visualise complex spatial arrangements from 2D resources. The lack of similar resources in this field suggests this workflow is original within a veterinary context. There is great potential to explore this method, and introduce a new dimension into veterinary education and training. PMID:28192461
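
    A minimal illustration of the segmentation and surface-generation steps described above is sketched below using NumPy and scikit-image on a synthetic volume; the threshold value and the toy volume are stand-ins, since the actual workflow was built on canine MR data with dedicated segmentation and 3D modeling software.

```python
# Hedged sketch: threshold-style segmentation of a volume followed by surface
# generation with marching cubes. The volume is synthetic; a real workflow
# would load MR images and use carefully tuned segmentation.
import numpy as np
from skimage import measure

# Synthetic "vessel": a bright blob inside a dark volume.
z, y, x = np.mgrid[:64, :64, :64]
volume = np.exp(-((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) / 200.0)

mask_level = 0.5                      # assumed segmentation threshold
verts, faces, normals, values = measure.marching_cubes(volume, level=mask_level)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
# The mesh could then be exported (e.g., as OBJ) for texturing in a 3D package.
```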

  14. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
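
    The final point above, that camera-to-object and baseline distances control error, follows from the standard stereo depth-error relation δZ ≈ Z²·δd/(B·f); the sketch below evaluates it for a few assumed configurations. The camera parameters are illustrative and are not those of the Nikon D5100 setup used in the study.

```python
# Hedged sketch: expected depth error of a stereo rig, using the standard
# relation depth_error ~ Z^2 * disparity_error / (baseline * focal_length).
# All parameter values are assumptions for illustration only.
def depth_error(Z, baseline, focal_px, disparity_error_px=0.25):
    """Z and baseline in metres; focal length and disparity error in pixels."""
    return Z ** 2 * disparity_error_px / (baseline * focal_px)

focal_px = 4000.0                      # assumed focal length in pixels
for Z in (1.0, 2.0):                   # camera-to-object distance (m)
    for baseline in (0.2, 0.5):        # stereo baseline (m)
        print(f"Z={Z} m, B={baseline} m -> depth error ≈ "
              f"{1000 * depth_error(Z, baseline, focal_px):.2f} mm")
# Larger baselines and shorter object distances both reduce the error, which
# is why their careful selection matters for DEM accuracy in occluded areas.
```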

  15. MC-GenomeKey: a multicloud system for the detection and annotation of genomic variants.

    PubMed

    Elshazly, Hatem; Souilmi, Yassine; Tonellato, Peter J; Wall, Dennis P; Abouelhoda, Mohamed

    2017-01-20

    Next-generation genome sequencing techniques have become affordable for massive sequencing efforts devoted to the clinical characterization of human diseases. However, the cost of providing cloud-based data analysis of the mounting datasets remains a concerning bottleneck for providing cost-effective clinical services. To address this computational problem, it is important to optimize the variant analysis workflow and the analysis tools used in order to reduce the overall computational processing time, and concomitantly reduce the processing cost. Furthermore, it is important to capitalize on recent developments in the cloud computing market, which has seen more providers competing in terms of products and prices. In this paper, we present a new package called MC-GenomeKey (Multi-Cloud GenomeKey) that efficiently executes the variant analysis workflow for detecting and annotating mutations using cloud resources from different commercial cloud providers. Our package supports the Amazon, Google, and Azure clouds, as well as any other cloud platform based on OpenStack. Our package allows different scenarios of execution with different levels of sophistication, up to the one where a workflow can be executed using a cluster whose nodes come from different clouds. MC-GenomeKey also supports scenarios that exploit the spot instance model of Amazon in combination with the use of other cloud platforms to provide significant cost reduction. To the best of our knowledge, this is the first solution that optimizes the execution of the workflow using computational resources from different cloud providers. MC-GenomeKey provides an efficient multicloud-based solution to detect and annotate mutations. The package can run in different commercial cloud platforms, which enables the user to take advantage of the best offers. The package also provides a reliable means to make use of the low-cost spot instance model of Amazon, as it provides an efficient solution to the sudden termination of spot machines as a result of a sudden price increase. The package has a web interface and is available for free for academic use.
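
    The abstract highlights tolerating the sudden termination of spot machines; a generic checkpoint-and-resume pattern for that situation is sketched below. It is a simplification under assumed stage names and a local checkpoint file, and is unrelated to MC-GenomeKey's real implementation.

```python
# Hedged sketch: resume an interrupted variant-analysis pipeline from its last
# checkpoint when a spot instance disappears. Stage names, the checkpoint
# format, and SpotTerminated are all hypothetical.
import json, pathlib

STAGES = ["align", "dedup", "call_variants", "annotate"]   # hypothetical pipeline
CHECKPOINT = pathlib.Path("checkpoint.json")

class SpotTerminated(RuntimeError):
    """Raised (in this toy model) when the spot machine is reclaimed."""

def run_stage(stage, sample):
    print(f"running {stage} on {sample}")                   # placeholder for real work

def run_pipeline(sample):
    done = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else []
    for stage in STAGES:
        if stage in done:
            continue                                         # finished before the interruption
        try:
            run_stage(stage, sample)
        except SpotTerminated:
            return False                                     # a fresh instance calls run_pipeline again
        done.append(stage)
        CHECKPOINT.write_text(json.dumps(done))              # in practice, persisted off-instance
    return True

run_pipeline("sample_001")
```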

  16. New ArcGIS tools developed for stream network extraction and basin delineations using Python and java script

    NASA Astrophysics Data System (ADS)

    Omran, Adel; Dietrich, Schröder; Abouelmagd, Abdou; Michael, Märker

    2016-09-01

    Damage caused by flash flood hazards is an increasing phenomenon, especially in arid and semi-arid areas. Thus, the need to evaluate these areas based on their flash flood risk using maps and hydrological models is also becoming more important. For ungauged watersheds a tentative analysis can be carried out based on the geomorphometric characteristics of the terrain. To process regions with larger watersheds, where perhaps hundreds of watersheds have to be delineated, processed and classified, the overall process needs to be automated. GIS packages such as ESRI's ArcGIS offer a number of sophisticated tools that support such analysis. Yet there are still gaps and pitfalls that need to be considered if the tools are combined into a geoprocessing model to automate the complete assessment workflow. These gaps include issues such as i) assigning stream order according to Strahler theory, ii) calculating the threshold value for the stream network extraction, and iii) determining the pour points for each of the nodes of the Strahler-ordered stream network. In this study a completely automated workflow based on ArcGIS Model Builder using standard tools is introduced and discussed. Some additional tools have been implemented to complete the overall workflow. These tools have been programmed using Python and Java in the context of ArcObjects. The workflow has been applied to digital data from the southwestern Sinai Peninsula, Egypt. An optimum threshold value has been selected to optimize the drainage configuration by statistically comparing all of the extracted stream configuration results from the DEM with the available reference data from topographic maps. The code has succeeded in estimating the correct ranking of specific stream orders in an automatic manner without additional manual steps. As a result, the code has proven to save time and effort; hence it is considered a very useful tool for processing large catchment basins.
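
    One of the gaps listed above is assigning Strahler order to an extracted stream network; the sketch below computes Strahler order on a small synthetic network represented as a dictionary of upstream links. It illustrates only the ordering rule (two tributaries of equal highest order raise the order by one) and is not the ArcObjects implementation described in the paper.

```python
# Hedged sketch: Strahler stream ordering on a toy network. Each key is a
# stream link; values are the links draining directly into it (hypothetical).
upstream = {
    "outlet": ["mid_1", "mid_2"],
    "mid_1":  ["head_1", "head_2"],
    "mid_2":  ["head_3"],
    "head_1": [], "head_2": [], "head_3": [],
}

def strahler(link):
    ups = upstream[link]
    if not ups:                       # headwater links are first order
        return 1
    orders = sorted((strahler(u) for u in ups), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1          # two tributaries of equal highest order
    return orders[0]

print({link: strahler(link) for link in upstream})
# -> head_* are order 1, mid_1 is order 2, mid_2 is order 1, outlet is order 2
```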

  17. Authentication systems for securing clinical documentation workflows. A systematic literature review.

    PubMed

    Schwartze, J; Haarbrandt, B; Fortmeier, D; Haux, R; Seidel, C

    2014-01-01

    Integration of electronic signatures embedded in health care processes in Germany challenges health care service and supply facilities. The suitability of the signature level of an eligible authentication procedure is confirmed for a large part of documents in clinical practice. However, the concrete design of such a procedure remains unclear. The objective was to create a summary of usable user authentication systems suitable for clinical workflows. A systematic literature review was performed based on nine online bibliographic databases. Search keywords included authentication, access control, information systems, information security and biometrics with terms user authentication, user identification and login in title or abstract. Searches were run between 7 and 12 September 2011. Relevant conference proceedings were searched manually in February 2013. Backward reference search of selected results was done. Only publications fully describing authentication systems used or usable were included; algorithms or purely theoretical concepts were excluded. Three authors performed the selection independently. Semi-structured extraction of system characteristics was done by the main author. Identified procedures were assessed for security and fulfillment of relevant laws and guidelines as well as for applicability. Suitability for clinical workflows was derived from the assessments using a weighted sum proposed by Bonneau. Of 7575 citations retrieved, 55 publications met our inclusion criteria. They describe 48 different authentication systems; 39 were biometric and nine were graphical password systems. Assessment of the authentication systems showed high error rates above European CENELEC standards and a lack of applicability of biometric systems. Graphical passwords did not add overall value compared to conventional passwords. Continuous authentication can add an additional layer of safety. Only a few systems are suitable, partially or entirely, for use in clinical processes; suitability strongly depends on national or institutional requirements. Four authentication systems seem to fulfill the requirements of authentication procedures for clinical workflows. Research is needed in the area of continuous authentication with biometric methods. A proper authentication system should combine all factors of authentication, implementing and connecting secure individual measures.
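
    The suitability assessment described above derives a single score from individual criteria using a weighted sum in the style proposed by Bonneau; a generic sketch of such a scoring step is shown below. The criteria, weights, and ratings are invented and do not reproduce the review's actual assessment framework.

```python
# Hedged sketch: weighted-sum suitability score for authentication systems.
# Criterion names, weights, and ratings are hypothetical placeholders.
WEIGHTS = {"security": 0.4, "usability": 0.3, "deployability": 0.2, "cost": 0.1}

def suitability(ratings):
    """ratings: {criterion: score in [0, 1]} -> weighted suitability score."""
    return sum(WEIGHTS[c] * ratings.get(c, 0.0) for c in WEIGHTS)

candidates = {
    "fingerprint":        {"security": 0.7, "usability": 0.9, "deployability": 0.5, "cost": 0.4},
    "graphical_password": {"security": 0.5, "usability": 0.6, "deployability": 0.9, "cost": 0.9},
}
for name, ratings in sorted(candidates.items(), key=lambda kv: -suitability(kv[1])):
    print(f"{name}: {suitability(ratings):.2f}")
```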

  18. Canine neuroanatomy: Development of a 3D reconstruction and interactive application for undergraduate veterinary education.

    PubMed

    Raffan, Hazel; Guevar, Julien; Poyade, Matthieu; Rea, Paul M

    2017-01-01

    Current methods used to communicate and present the complex arrangement of vasculature related to the brain and spinal cord are limited in undergraduate veterinary neuroanatomy training. Traditionally, this is taught with 2-dimensional (2D) diagrams, photographs and medical imaging scans which show a fixed viewpoint. 2D representations of 3-dimensional (3D) objects, however, lead to loss of spatial information, which can present problems when translating this to the patient. Computer-assisted learning packages with interactive 3D anatomical models have become established in medical training, yet equivalent resources are scarce in veterinary education. For this reason, we set out to develop a workflow methodology for creating an interactive model depicting the vasculature of the canine brain that could be used in undergraduate education. Using MR images of a dog and several commonly available software programs, we set out to show how combining image editing, segmentation and surface generation, 3D modeling and texturing can result in the creation of a fully interactive application for veterinary training. In addition to clearly identifying a workflow methodology for the creation of this dataset, we have also demonstrated how an interactive tutorial and self-assessment tool can be incorporated into this. In conclusion, we present a workflow which has been successful in developing a 3D reconstruction of the canine brain and associated vasculature through segmentation, surface generation and post-processing of readily available medical imaging data. The reconstructed model was implemented into an interactive application for veterinary education that has been designed to target the problems associated with learning neuroanatomy, primarily the inability to visualise complex spatial arrangements from 2D resources. The lack of similar resources in this field suggests this workflow is original within a veterinary context. There is great potential to explore this method, and introduce a new dimension into veterinary education and training.

  19. Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Ramachandran, R.; Lynnes, C.

    2009-05-01

    A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).

  20. Improving adherence to the Epic Beacon ambulatory workflow.

    PubMed

    Chackunkal, Ellen; Dhanapal Vogel, Vishnuprabha; Grycki, Meredith; Kostoff, Diana

    2017-06-01

    Computerized physician order entry has been shown to significantly improve chemotherapy safety by reducing the number of prescribing errors. Epic's Beacon Oncology Information System of computerized physician order entry and electronic medication administration was implemented in Henry Ford Health System's ambulatory oncology infusion centers on 9 November 2013. Since that time, compliance with the infusion workflow had not been assessed. The objective of this study was to optimize the current workflow and improve compliance with this workflow in the ambulatory oncology setting. This study was a retrospective, quasi-experimental study which analyzed the composite workflow compliance rate of patient encounters from 9 to 23 November 2014. Based on this analysis, an intervention was identified and implemented in February 2015 to improve workflow compliance. The primary endpoint was to compare the composite compliance rate with the Beacon workflow before and after a pharmacy-initiated intervention. The intervention, which was education of infusion center staff, was initiated by ambulatory-based oncology pharmacists and implemented by a multi-disciplinary team of pharmacists and nurses. The composite compliance rate was then reassessed for patient encounters from 2 to 13 March 2015 in order to analyze the effects of the intervention on compliance. The initial analysis in November 2014 revealed a composite compliance rate of 38%, and data analysis after the intervention revealed a statistically significant increase in the composite compliance rate to 83% (p < 0.001). This study supports that a pharmacist-initiated educational intervention can improve compliance with an ambulatory oncology infusion workflow.
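
    The reported jump in composite compliance from 38% to 83% (p < 0.001) can be checked with a two-proportion test once the encounter counts are known; the sketch below uses hypothetical encounter numbers, since the abstract does not state them, so only the two rates come from the source.

```python
# Hedged sketch: chi-square test comparing workflow-compliance proportions
# before and after the educational intervention. Encounter counts are assumed;
# only the 38% and 83% rates come from the abstract.
from scipy.stats import chi2_contingency

n_pre, n_post = 200, 200                              # assumed numbers of encounters
compliant_pre, compliant_post = round(0.38 * n_pre), round(0.83 * n_post)
table = [[compliant_pre, n_pre - compliant_pre],
         [compliant_post, n_post - compliant_post]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")              # strongly significant for these counts
```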

  1. Exploring Dental Providers’ Workflow in an Electronic Dental Record Environment

    PubMed Central

    Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N.; Ye, Zhan

    2016-01-01

    Summary Background A workflow is defined as a predefined set of work steps and partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at point of care to assess breakdown in the workflow which could contribute to better technology designs. Objective The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. Methods A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Results Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). Conclusions On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR. PMID:27437058

  2. Workflow continuity--moving beyond business continuity in a multisite 24-7 healthcare organization.

    PubMed

    Kolowitz, Brian J; Lauro, Gonzalo Romero; Barkey, Charles; Black, Harry; Light, Karen; Deible, Christopher

    2012-12-01

    As hospitals move towards providing in-house 24 × 7 services, there is an increasing need for information systems to be available around the clock. This study investigates one organization's need for a workflow continuity solution that provides around the clock availability for information systems that do not provide highly available services. The organization investigated is a large multifacility healthcare organization that consists of 20 hospitals and more than 30 imaging centers. A case analysis approach was used to investigate the organization's efforts. The results show an overall reduction in downtimes where radiologists could not continue their normal workflow on the integrated Picture Archiving and Communications System (PACS) solution by 94 % from 2008 to 2011. The impact of unplanned downtimes was reduced by 72 % while the impact of planned downtimes was reduced by 99.66 % over the same period. Additionally more than 98 h of radiologist impact due to a PACS upgrade in 2008 was entirely eliminated in 2011 utilizing the system created by the workflow continuity approach. Workflow continuity differs from high availability and business continuity in its design process and available services. Workflow continuity only ensures that critical workflows are available when the production system is unavailable due to scheduled or unscheduled downtimes. Workflow continuity works in conjunction with business continuity and highly available system designs. The results of this investigation revealed that this approach can add significant value to organizations because impact on users is minimized if not eliminated entirely.

  3. 78 FR 22880 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    ... between Health IT and Ambulatory Care Workflow Redesign.'' In accordance with the Paperwork Reduction Act... Understand the Relationship between Health IT and Ambulatory Care Workflow Redesign. The Agency for... Methods to Better Understand the Relationship between Health IT and Ambulatory Care Workflow Redesign...

  4. Online time and resource management based on surgical workflow time series analysis.

    PubMed

    Maktabi, M; Neumuth, T

    2017-02-01

    Hospitals' effectiveness and efficiency can be enhanced by automating the resource and time management of the most cost-intensive unit in the hospital: the operating room (OR). The key elements required for the ideal organization of hospital staff and technical resources (such as instruments in the OR) are an exact online forecast of both the surgeon's resource usage and the remaining intervention time. This paper presents a novel online approach relying on time series analysis and the application of a linear time-variant system. We calculated the power spectral density and the spectrogram of surgical perspectives (e.g., used instrument) of interest to compare several surgical workflows. Considering only the use of the surgeon's right hand during an intervention, we were able to predict the remaining intervention time online with an error of 21 min 45 s ±9 min 59 s for lumbar discectomy. Furthermore, the performance of forecasting of technical resource usage in the next 20 min was calculated for a combination of spectral analysis and the application of a linear time-variant system (sensitivity: 74 %; specificity: 75 %) focusing on just the use of surgeon's instrument in question. The outstanding benefit of these methods is that the automated recording of surgical workflows has minimal impact during interventions since the whole set of surgical perspectives need not be recorded. The resulting predictions can help various stakeholders such as OR staff and hospital technicians. Moreover, reducing resource conflicts could well improve patient care.
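
    A minimal sketch (not the authors' code) of the kind of spectral analysis described above: compute the power spectral density and spectrogram of a synthetic, binary "instrument in use" signal sampled once per second, assuming SciPy is available.

        import numpy as np
        from scipy import signal

        fs = 1.0                                  # one observation per second
        t = np.arange(0, 3600)                    # a one-hour intervention
        rng = np.random.default_rng(0)
        # Synthetic usage of the surgeon's right-hand instrument with a ~5 min rhythm.
        usage = ((np.sin(2 * np.pi * t / 300) + rng.normal(0, 0.5, t.size)) > 0).astype(float)

        f_psd, psd = signal.welch(usage, fs=fs, nperseg=512)
        f_spec, t_spec, sxx = signal.spectrogram(usage, fs=fs, nperseg=256, noverlap=128)

        dominant = 1.0 / f_psd[np.argmax(psd[1:]) + 1]   # skip the DC bin
        print(f"dominant period in instrument use: {dominant:.0f} s")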

  5. Bidirectional Retroviral Integration Site PCR Methodology and Quantitative Data Analysis Workflow.

    PubMed

    Suryawanshi, Gajendra W; Xu, Song; Xie, Yiming; Chou, Tom; Kim, Namshin; Chen, Irvin S Y; Kim, Sanggu

    2017-06-14

    Integration Site (IS) assays are a critical component of the study of retroviral integration sites and their biological significance. In recent retroviral gene therapy studies, IS assays, in combination with next-generation sequencing, have been used as a cell-tracking tool to characterize clonal stem cell populations sharing the same IS. For the accurate comparison of repopulating stem cell clones within and across different samples, the detection sensitivity, data reproducibility, and high-throughput capacity of the assay are among the most important assay qualities. This work provides a detailed protocol and data analysis workflow for bidirectional IS analysis. The bidirectional assay can simultaneously sequence both upstream and downstream vector-host junctions. Compared to conventional unidirectional IS sequencing approaches, the bidirectional approach significantly improves IS detection rates and the characterization of integration events at both ends of the target DNA. The data analysis pipeline described here accurately identifies and enumerates identical IS sequences through multiple steps of comparison that map IS sequences onto the reference genome and determine sequencing errors. Using an optimized assay procedure, we have recently published the detailed repopulation patterns of thousands of Hematopoietic Stem Cell (HSC) clones following transplant in rhesus macaques, demonstrating for the first time the precise time point of HSC repopulation and the functional heterogeneity of HSCs in the primate system. The following protocol describes the step-by-step experimental procedure and data analysis workflow that accurately identifies and quantifies identical IS sequences.

  6. Multidimensional electrostatic repulsion-hydrophilic interaction chromatography (ERLIC) for quantitative analysis of the proteome and phosphoproteome in clinical and biomedical research.

    PubMed

    Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert

    2015-05-01

    Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually infinite sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires an efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow covering the global proteome as well. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of a tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and even ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by non-phosphorylated peptides, concurrently yielding insights into changes on the levels of protein expression and phosphorylation. Besides its sensitivity, our novel three-dimensional ERLIC strategy has the potential for semi-automated sample processing, rendering it a suitable future perspective for clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.

  7. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex, and therefore, decision making in multidisciplinary teams is becoming the key activity in the clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks, can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.

  8. An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64.

    PubMed

    Winkler, Robert

    2015-01-01

    In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models. Several data processing and data evaluation steps are required to come to the final results. These operations are often difficult to reproduce because the computing platforms used are too specific. This effect, known as 'workflow decay', can be diminished by using a standardized informatic infrastructure. Thus, we compiled an integrated platform, which contains ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) Taverna. We explain the useful combination of the tools by practical examples: (1) A workflow for protein identification and validation, with subsequent Association Analysis of peptides, (2) Cluster analysis and Data Mining in targeted Metabolomics, and (3) Raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present their application for finding co-occurring peptides, which can be used for target proteomics, the discovery of alternative biomarkers and protein-protein interactions. Data Mining-derived models displayed higher robustness and accuracy for classifying sample groups in targeted Metabolomics than cluster analyses. Random Forest models not only provide predictive models, which can be deployed for new data sets, but also variable importance. We demonstrate that the latter is especially useful for tracking down significant signals and affected pathways in untargeted Metabolomics. Thus, Random Forest modeling supports the unbiased search for relevant biological features in Metabolomics. Our results clearly demonstrate the importance of Data Mining methods to disclose non-obvious information in biological mass spectrometry. The application of a Workflow Management System and the integration of all required programs and data in a consistent platform makes the presented data analysis strategies reproducible for non-expert users. The simple remastering process and the Open Source licenses of MASSyPup64 (http://www.bioprocess.org/massypup/) enable the continuous improvement of the system.
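
    To make the Random Forest point concrete, here is a small, self-contained sketch using scikit-learn rather than the R/Rattle stack shipped with MASSyPup64; the feature matrix is synthetic and stands in for, e.g., metabolite intensities across two sample groups.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 200))          # 60 samples x 200 hypothetical metabolite features
        y = np.repeat([0, 1], 30)               # two sample groups
        X[y == 1, :5] += 1.5                    # make the first five features informative

        model = RandomForestClassifier(n_estimators=500, random_state=0)
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

        model.fit(X, y)
        top = np.argsort(model.feature_importances_)[::-1][:5]
        print("most important features:", top)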

  9. Military Interoperable Digital Hospital Testbed (MIDHT)

    DTIC Science & Technology

    2015-12-01

    ... and analyze the resulting technological impact on medication errors, pharmacists' productivity, nurse satisfaction/workflow and patient ... medication errors, pharmacists' productivity, nurse satisfaction/workflow and patient satisfaction. 1.1.1 Pharmacy Robotics Implementation ... 1.2 Research and analyze the resulting technological impact on medication errors, pharmacist productivity, nurse satisfaction/workflow and patient ...

  10. Provenance Storage, Querying, and Visualization in PBase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kianmajd, Parisa; Ludascher, Bertram; Missier, Paolo

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.
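
    The sketch below illustrates the general idea of querying a stored trace with SPARQL; it uses the W3C PROV vocabulary and the rdflib Python library, not PBase's actual schema, ProvONE terms, or API, and the resource names are made up.

        from rdflib import Graph, Namespace
        from rdflib.namespace import RDF

        PROV = Namespace("http://www.w3.org/ns/prov#")
        EX = Namespace("http://example.org/trace#")

        # Build a tiny provenance graph: a plot entity generated by a workflow step.
        g = Graph()
        g.add((EX.plot, RDF.type, PROV.Entity))
        g.add((EX.plot, PROV.wasGeneratedBy, EX.render_step))
        g.add((EX.render_step, PROV.used, EX.filtered_data))

        query = """
        PREFIX prov: <http://www.w3.org/ns/prov#>
        SELECT ?entity ?activity WHERE { ?entity prov:wasGeneratedBy ?activity . }
        """
        for row in g.query(query):
            print(row.entity, "was generated by", row.activity)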

  11. Context-aware workflow management of mobile health applications.

    PubMed

    Salden, Alfons; Poortinga, Remco

    2006-01-01

    We propose a medical application management architecture that allows medical (IT) experts to readily design, develop and deploy context-aware mobile health (m-health) applications or services. In particular, we elaborate on how our application workflow management architecture enables chaining, coordinating, composing, and adapting context-sensitive medical application components such that critical Quality of Service (QoS) and Quality of Context (QoC) requirements typical for m-health applications or services can be met. This functional architectural support requires learning modules for distilling application-critical selection of attention and anticipation models. These models will help medical experts construct and adjust on-the-fly m-health application workflows and workflow strategies. We illustrate our context-aware workflow management paradigm for an m-health data delivery problem, in which optimal communication network configurations have to be determined.

  12. Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in ultrascale environments

    DOE PAGES

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...

    2016-10-06

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  13. Prototype of Kepler Processing Workflows For Microscopy And Neuroinformatics

    PubMed Central

    Astakhov, V.; Bandrowski, A.; Gupta, A.; Kulungowski, A.W.; Grethe, J.S.; Bouwer, J.; Molina, T.; Rowley, V.; Penticoff, S.; Terada, M.; Wong, W.; Hakozaki, H.; Kwon, O.; Martone, M.E.; Ellisman, M.

    2016-01-01

    We report on progress of employing the Kepler workflow engine to prototype “end-to-end” application integration workflows that concern data coming from microscopes deployed at the National Center for Microscopy Imaging Research (NCMIR). This system is built upon the mature code base of the Cell Centered Database (CCDB) and integrated rule-oriented data system (IRODS) for distributed storage. It provides integration with external projects such as the Whole Brain Catalog (WBC) and Neuroscience Information Framework (NIF), which benefit from NCMIR data. We also report on specific workflows which spawn from main workflows and perform data fusion and orchestration of Web services specific for the NIF project. This “Brain data flow” presents a user with categorized information about sources that have information on various brain regions. PMID:28479932

  14. Workflow technology: the new frontier. How to overcome the barriers and join the future.

    PubMed

    Shefter, Susan M

    2006-01-01

    Hospitals are catching up to the business world in the introduction of technology systems that support professional practice and workflow. The field of case management is highly complex and interrelates with diverse groups in diverse locations. The last few years have seen the introduction of Workflow Technology Tools, which can improve the quality and efficiency of discharge planning by the case manager. Despite the availability of these wonderful new programs, many case managers are hesitant to adopt the new technology and workflow. For a myriad of reasons, a computer-based workflow system can seem like a brick wall. This article discusses, from a practitioner's point of view, how professionals can gain confidence and skill to get around the brick wall and join the future.

  15. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
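
    A toy sketch of the idea of searching a multi-dimensional parameter space under a quality constraint; the parameters, cost model, and threshold are hypothetical and merely illustrate the accuracy-versus-performance trade-off discussed above.

        from itertools import product

        def run_workflow(n_nodes, chunk_size):
            # Stand-in cost/quality model; a real framework would execute the workflow.
            runtime = 120.0 / n_nodes + 0.01 * chunk_size
            quality = 1.0 - 0.0005 * chunk_size          # coarser chunks -> lower accuracy
            return runtime, quality

        best = None
        for n_nodes, chunk_size in product([4, 8, 16], [64, 256, 1024]):
            runtime, quality = run_workflow(n_nodes, chunk_size)
            if quality >= 0.9 and (best is None or runtime < best[0]):
                best = (runtime, quality, n_nodes, chunk_size)

        print("fastest configuration meeting the quality constraint:", best)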

  16. Managing and Communicating Operational Workflow: Designing and Implementing an Electronic Outpatient Whiteboard.

    PubMed

    Steitz, Bryan D; Weinberg, Stuart T; Danciu, Ioana; Unertl, Kim M

    2016-01-01

    Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. To describe and discuss the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings.

  17. Can EO afford big data - an assessment of the temporal and monetary costs of existing and emerging big data workflows

    NASA Astrophysics Data System (ADS)

    Clements, Oliver; Walker, Peter

    2014-05-01

    The cost of working with extremely large data sets is an increasingly important issue within the Earth Observation community. From global coverage data at any resolution to small coverage data at extremely high resolution, the community has always produced big data. This will only increase as new sensors are deployed and their data made available. Over time standard workflows have emerged. These have been facilitated by the production and adoption of standard technologies. Groups such as the International Organisation for Standardisation (ISO) and the Open Geospatial Consortium (OGC) have been a driving force in this area for many years. The production of standard protocols and interfaces such as OPeNDAP, Web Coverage Service (WCS), Web Processing Service (WPS) and the newer emerging standards such as Web Coverage Processing Service (WCPS) have helped to galvanise these workflows. As an example of a traditional workflow, assume a researcher wants to assess the temporal trend in chlorophyll concentration. This would involve a discovery phase, an acquisition phase, a processing phase and finally a derived product or analysis phase. Each element of this workflow has an associated temporal and monetary cost. Firstly, the researcher would require a high bandwidth connection or the acquisition phase would take too long. Secondly, the researcher must have their own expensive equipment for use in the processing phase. Both of these elements cost money and time. This can make the whole process prohibitive to scientists from the developing world or "citizen scientists" that do not have the processing infrastructure necessary. The use of emerging technologies can help improve both the monetary and time costs associated with these existing workflows. By utilising a WPS that is hosted at the same location as the data, a user is able to apply processing to the data without needing their own processing infrastructure. This however limits the user to predefined processes that are made available by the data provider. The emerging OGC WCPS standard combined with big data analytics engines may provide a mechanism to improve this situation. The technology allows users to create their own queries using an SQL-like query language and apply them over the available large data archives, once again at the data provider's end. This not only removes the processing cost whilst still allowing user-defined processes, it also reduces the bandwidth required, as only the final analysis or derived product needs to be downloaded. The maturity of the new technologies is at a stage where their use should be justified by a quantitative assessment rather than simply by the fact that they are new developments. We will present a study of the time and cost requirements for a selection of existing workflows and then show how new/emerging standards and technologies can help both to reduce the cost to the user by shifting processing to the data, and to reduce the required bandwidth for analysing large datasets, making analysis of big-data archives possible for a greater and more diverse audience.
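
    For illustration, a request of this kind might look like the following; the endpoint URL and coverage name are hypothetical and the query is a generic WCPS-style expression, so the details will differ for any real service.

        import requests

        endpoint = "https://data-provider.example.org/rasdaman/ows"   # hypothetical endpoint
        query = (
            "for c in (chlorophyll_monthly) "
            "return encode(avg(c), \"csv\")"
        )

        # Push the computation to the data provider; only the small derived
        # result crosses the network.
        resp = requests.get(
            endpoint,
            params={"service": "WCS", "version": "2.0.1",
                    "request": "ProcessCoverages", "query": query},
            timeout=60,
        )
        resp.raise_for_status()
        print(resp.text)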

  18. SU-F-T-251: The Quality Assurance for the Heavy Patient Load Department in the Developing Country: The Primary Experience of An Entire Workflow QA Process Management in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, J; Wang, J; Peng, J

    Purpose: To implement an entire workflow quality assurance (QA) process in the radiotherapy department and to reduce the error rates of radiotherapy based on entire workflow management in a developing country. Methods: The entire workflow QA process management spans from patient registration to the end of the last treatment, including all steps through the entire radiotherapy process. The chart-check error rate is used to evaluate the entire workflow QA process. Two to three qualified senior medical physicists checked the documents before the first treatment fraction of every patient. Random checks of the treatment history during treatment were also performed. A total of around 6000 patients' treatment data before and after implementing the entire workflow QA process were compared from May 2014 to December 2015. Results: A systematic checklist was established. It mainly includes patient registration, treatment plan QA, information exporting to the OIS (Oncology Information System), documents of treatment QA and QA of the treatment history. The error rate derived from the chart check decreased from 1.7% to 0.9% after implementing the entire workflow QA process. All checked errors before the first treatment fraction were corrected as soon as the oncologist re-confirmed them, and reinforced staff training followed accordingly to prevent those errors. Conclusion: The entire workflow QA process improved the safety and quality of radiotherapy in our department, and we consider that our QA experience can be applicable to heavily loaded radiotherapy departments in developing countries.

  19. An automated analysis workflow for optimization of force-field parameters using neutron scattering data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Vickie E.; Borreguero, Jose M.; Bhowmik, Debsindhu

    Highlights: • An automated workflow to optimize force-field parameters. • Used the workflow to optimize force-field parameters for a system containing nanodiamond and tRNA. • The mechanism relies on molecular dynamics simulation and neutron scattering experimental data. • The workflow can be generalized to any other experimental and simulation techniques. - Abstract: Large-scale simulations and data analysis are often required to explain neutron scattering experiments to establish a connection between the fundamental physics at the nanoscale and data probed by neutrons. However, to perform simulations at experimental conditions it is critical to use correct force-field (FF) parameters, which are unfortunately not available for most complex experimental systems. In this work, we have developed a workflow optimization technique to provide optimized FF parameters by comparing molecular dynamics (MD) to neutron scattering data. We describe the workflow in detail by using an example system consisting of tRNA and hydrophilic nanodiamonds in a deuterated water (D2O) environment. Quasi-elastic neutron scattering (QENS) data show a faster motion of the tRNA in the presence of nanodiamond than without the ND. To compare the QENS and MD results quantitatively, a proper choice of FF parameters is necessary. We use an efficient workflow to optimize the FF parameters between the hydrophilic nanodiamond and water by comparing to the QENS data. Our results show that we can obtain accurate FF parameters by using this technique. The workflow can be generalized to other types of neutron data for FF optimization, such as vibrational spectroscopy and spin echo.

  20. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.
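
    The following is a conceptual sketch of a message-driven runtime in plain Python, i.e., a task fires only once all of its expected input messages have arrived; it is not Decaf's actual Python API, just an illustration of the execution model described above.

        from collections import defaultdict

        class MessageDrivenRunner:
            def __init__(self):
                self.needs = {}                     # task -> set of expected input links
                self.inbox = defaultdict(dict)      # task -> {link: payload}
                self.funcs = {}

            def add_task(self, name, func, inputs):
                self.needs[name] = set(inputs)
                self.funcs[name] = func

            def send(self, task, link, payload):
                self.inbox[task][link] = payload
                if set(self.inbox[task]) == self.needs[task]:   # all messages received
                    self.funcs[task](**self.inbox[task])

        runner = MessageDrivenRunner()
        runner.add_task("density",
                        lambda particles: print("estimating density for", particles),
                        inputs=["particles"])
        runner.send("density", "particles", "cosmology_step_042")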

  1. Experiences and lessons learned from creating a generalized workflow for data publication of field campaign datasets

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S. K.; Ramachandran, R.; Deb, D.; Beaty, T.; Wright, D.

    2017-12-01

    This paper summarizes the workflow challenges of curating and publishing data produced from disparate data sources and provides a generalized workflow solution to efficiently archive data generated by researchers. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics and the Global Hydrology Resource Center (GHRC) DAAC have been collaborating on the development of a generalized workflow solution to efficiently manage the data publication process. The generalized workflow presented here is built on lessons learned from implementations of the workflow system. Data publication consists of the following steps: accepting the data package from the data providers and ensuring the full integrity of the data files; identifying and addressing data quality issues; assembling standardized, detailed metadata and documentation, including file-level details, processing methodology, and characteristics of data files; setting up data access mechanisms; setting up the data in data tools and services for improved data dissemination and user experience; registering the dataset in online search and discovery catalogues; and preserving the data location through Digital Object Identifiers (DOIs). We will describe the steps taken to automate this process and realize efficiencies. The goals of the workflow system are to reduce the time taken to publish a dataset, to increase the quality of documentation and metadata, and to track individual datasets through the data curation process. Utilities developed to achieve these goals will be described. We will also share the metrics-driven value of the workflow system and discuss future steps towards the creation of a common software framework.
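
    A bare-bones sketch of how those publication steps might be chained in code; the file pattern, metadata fields, and the DOI call are placeholders and do not reflect the ORNL or GHRC DAAC implementation.

        import hashlib
        from pathlib import Path

        def verify_integrity(files):
            # Checksums document the full integrity of the submitted data files.
            return {f.name: hashlib.sha256(f.read_bytes()).hexdigest() for f in files}

        def assemble_metadata(files, checksums, methodology):
            return {"files": [{"name": f.name, "bytes": f.stat().st_size,
                               "sha256": checksums[f.name]} for f in files],
                    "methodology": methodology}

        def register_doi(metadata):
            # A real workflow would call a DOI registration service here.
            return "10.0000/placeholder-doi"

        def publish(data_dir, methodology):
            files = sorted(Path(data_dir).glob("*.csv"))
            checksums = verify_integrity(files)
            metadata = assemble_metadata(files, checksums, methodology)
            metadata["doi"] = register_doi(metadata)
            return metadata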

  2. Structured recording of intraoperative surgical workflows

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Durstewitz, N.; Fischer, M.; Strauss, G.; Dietz, A.; Meixensberger, J.; Jannin, P.; Cleary, K.; Lemke, H. U.; Burgert, O.

    2006-03-01

    Surgical Workflows are used for the methodical and scientific analysis of surgical interventions. The approach described here is a step towards developing surgical assist systems based on Surgical Workflows and integrated control systems for the operating room of the future. This paper describes concepts and technologies for the acquisition of Surgical Workflows by monitoring surgical interventions and their presentation. Establishing systems which support the Surgical Workflow in operating rooms requires a multi-staged development process beginning with the description of these workflows. A formalized description of surgical interventions is needed to create a Surgical Workflow. This description can be used to analyze and evaluate surgical interventions in detail. We discuss the subdivision of surgical interventions into work steps regarding different levels of granularity and propose a recording scheme for the acquisition of manual surgical work steps from running interventions. To support the recording process during the intervention, we introduce a new software architecture. Core of the architecture is our Surgical Workflow editor that is intended to deal with the manifold, complex and concurrent relations during an intervention. Furthermore, a method for an automatic generation of graphs is shown which is able to display the recorded surgical work steps of the interventions. Finally we conclude with considerations about extensions of our recording scheme to close the gap to S-PACS systems. The approach was used to record 83 surgical interventions from 6 intervention types from 3 different surgical disciplines: ENT surgery, neurosurgery and interventional radiology. The interventions were recorded at the University Hospital Leipzig, Germany and at the Georgetown University Hospital, Washington, D.C., USA.
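
    A possible, purely illustrative encoding of a single manual work step at one level of granularity (the actual recording scheme in the paper is richer and tool-specific):

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class WorkStep:
            actor: str              # e.g. "surgeon"
            action: str             # e.g. "dissect"
            instrument: str         # e.g. "scalpel"
            anatomical_target: str  # structure being treated
            started: datetime
            ended: datetime

        step = WorkStep("surgeon", "dissect", "scalpel", "vocal fold",
                        datetime(2006, 3, 1, 9, 15, 0), datetime(2006, 3, 1, 9, 18, 30))
        print(step)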

  3. A chromophoric study of 2-ethylhexyl p-methoxycinnamate

    NASA Astrophysics Data System (ADS)

    Alves, Leonardo F.; Gargano, Ricardo; Alcanfor, Silvia K. B.; Romeiro, Luiz A. S.; Martins, João B. L.

    2011-11-01

    Ultraviolet absorption spectra of 2-ethylhexyl p-methoxycinnamate have been recorded in different solvents and calculated using time-dependent density functional theory. The calculations were performed with the aid of the B3LYP, PBE1PBE, M06, and PBEPBE functionals and the 6-31+G(2d) basis set. The geometries were initially optimized using the PM5 semiempirical method for the conformational search. The excited-state calculations were carried out using the time-dependent approach with the IEF-PCM solvent reaction field method. The experimental data were obtained in the wavelength range from 200 to 400 nm using 10 different solvents. The TD-PBE1PBE method shows the best agreement with the experimental results.
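
    As a small aside, computed vertical excitation energies (in eV) are typically compared with measured UV maxima via the conversion wavelength(nm) ≈ 1239.84 / E(eV); the energy used below is illustrative only.

        def ev_to_nm(energy_ev):
            """Convert an excitation energy in eV to a wavelength in nm (hc ~ 1239.84 eV*nm)."""
            return 1239.84198 / energy_ev

        print(f"{ev_to_nm(4.0):.1f} nm")  # a 4.0 eV excitation corresponds to roughly 310 nm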

  4. Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Ramachandran, R.; Lynnes, C.

    2009-12-01

    A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source Active BPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).

  5. Extension of specification language for soundness and completeness of service workflow

    NASA Astrophysics Data System (ADS)

    Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn

    2018-05-01

    A Service Workflow is an aggregation of distributed services to fulfill specific functionalities. With the ever-increasing number of available services, methodologies for selecting services against given requirements have become main research subjects in multiple disciplines. A few researchers have contributed formal specification languages and methods for model checking; however, existing methods have difficulty tackling the complexity of workflow compositions. In this paper, we propose to formalize the specification language to reduce the complexity of the workflow composition. To this end, we extend a specification language with the consideration of formal logic, so that some effective theorems can be derived for the verification of syntax, semantics, and inference rules in the workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec is demonstrated by examples addressing soundness, completeness, and consistency.

  6. Development of a novel imaging informatics-based system with an intelligent workflow engine (IWEIS) to support imaging-based clinical trials

    PubMed Central

    Wang, Ximing; Liu, Brent J; Martinez, Clarisa; Zhang, Xuejun; Winstein, Carolee J

    2015-01-01

    Imaging-based clinical trials can benefit from a solution to efficiently collect, analyze, and distribute multimedia data at various stages within the workflow. Currently, the data management needs of these trials are typically addressed with custom-built systems. However, software development of custom-built systems for versatile workflows can be resource-consuming. To address these challenges, we present a system with a workflow engine for imaging-based clinical trials. The system enables a project coordinator to build a data collection and management system specifically related to study protocol workflow without programming. A Web Access to DICOM Objects (WADO) module with novel features is integrated to further facilitate imaging-related studies. The system was initially evaluated by an imaging-based rehabilitation clinical trial. The evaluation shows that the cost of the development of the system can be much reduced compared to the custom-built system. By providing a solution to customize a system and automate the workflow, the system will save on development time and reduce errors, especially for imaging clinical trials. PMID:25870169

  7. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
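
    A toy sketch of the embarrassingly parallel classification step; the cluster profiles and scoring function below are stand-ins for the PSI-BLAST-based classifier and cluster database used in the actual workflow.

        from multiprocessing import Pool

        CLUSTERS = {"COG0001": "MKTAYIAK", "COG0002": "MADEEKLP"}   # hypothetical profiles

        def classify(protein):
            name, seq = protein
            # Stand-in score: shared residue count; the real workflow aligns against profiles.
            scores = {cog: sum(a == b for a, b in zip(seq, ref))
                      for cog, ref in CLUSTERS.items()}
            best = max(scores, key=scores.get)
            return name, best, scores[best]

        if __name__ == "__main__":
            proteins = [("protA", "MKTAYLAK"), ("protB", "MADEQKLP")]
            with Pool() as pool:
                for name, cog, score in pool.map(classify, proteins):
                    print(name, "->", cog, "score", score)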

  8. Optimizing high performance computing workflow for protein functional annotation

    PubMed Central

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-01-01

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  9. COSMOS: Python library for massively parallel workflows

    PubMed Central

    Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.

    2014-01-01

    Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428

  10. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.

  11. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  12. Using Kepler for Tool Integration in Microarray Analysis Workflows.

    PubMed

    Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C

    Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which requires complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments. This makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools including Bioconductor packages; AltAnalyze, a Python-based open source tool; and an R-based comparison tool to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.

  13. COSMOS: Python library for massively parallel workflows.

    PubMed

    Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J

    2014-10-15

    Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  14. The impact of computerized provider order entry systems on inpatient clinical workflow: a literature review.

    PubMed

    Niazkhani, Zahra; Pirnejad, Habibollah; Berg, Marc; Aarts, Jos

    2009-01-01

    Previous studies have shown the importance of workflow issues in the implementation of CPOE systems and patient safety practices. To understand the impact of CPOE on clinical workflow, we developed a conceptual framework and conducted a literature search for CPOE evaluations between 1990 and June 2007. Fifty-one publications were identified that disclosed mixed effects of CPOE systems. Among the frequently reported workflow advantages were the legible orders, remote accessibility of the systems, and the shorter order turnaround times. Among the frequently reported disadvantages were the time-consuming and problematic user-system interactions, and the enforcement of a predefined relationship between clinical tasks and between providers. Regarding the diversity of findings in the literature, we conclude that more multi-method research is needed to explore CPOE's multidimensional and collective impact on especially collaborative workflow.

  15. Load-sensitive dynamic workflow re-orchestration and optimisation for faster patient healthcare.

    PubMed

    Meli, Christopher L; Khalil, Ibrahim; Tari, Zahir

    2014-01-01

    Hospital waiting times are considerably long, with no signs of reducing any time soon. A number of factors, including population growth, the ageing population and a lack of new infrastructure, are expected to further exacerbate waiting times in the near future. In this work, we show how healthcare services can be modelled as queueing nodes, together with healthcare service workflows, such that these workflows can be optimised during execution in order to reduce patient waiting times. Services such as X-ray, computed tomography, and magnetic resonance imaging often form queues; thus, by taking into account the waiting times of each service, the workflow can be re-orchestrated and optimised. Experimental results indicate that average waiting time reductions are achievable by optimising workflows using dynamic re-orchestration. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.
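
    A minimal sketch of the queueing idea, assuming each imaging service behaves like an M/M/1 queue, for which the expected time in system is W = 1/(mu - lambda); the arrival and service rates are hypothetical and the remaining workflow steps are assumed to be order-independent.

        services = {                       # arrival rate lam and service rate mu, per hour
            "X-ray": {"lam": 5.0, "mu": 6.0},
            "CT":    {"lam": 3.0, "mu": 4.0},
            "MRI":   {"lam": 1.5, "mu": 2.0},
        }

        def expected_wait(lam, mu):
            if lam >= mu:
                return float("inf")        # unstable queue, avoid routing here
            return 1.0 / (mu - lam)        # mean time in system for an M/M/1 queue

        # Re-orchestrate: visit the currently least-loaded services first.
        plan = sorted(services, key=lambda s: expected_wait(**services[s]))
        for s in plan:
            print(f"{s}: expected time in system {expected_wait(**services[s]):.2f} h")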

  16. Scientific workflows as productivity tools for drug discovery.

    PubMed

    Shon, John; Ohkawa, Hitomi; Hammer, Juergen

    2008-05-01

    Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.

  17. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
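
    A back-of-the-envelope version of the three-stage model (transfer + queue wait + compute) used to pick a resource; all site parameters and workload numbers below are hypothetical.

        sites = {
            "local_cluster": {"bandwidth_MBps": 800, "queue_wait_s": 60,  "node_rate": 1.0, "nodes": 16},
            "hpc_center":    {"bandwidth_MBps": 120, "queue_wait_s": 900, "node_rate": 1.4, "nodes": 256},
            "cloud":         {"bandwidth_MBps": 60,  "queue_wait_s": 10,  "node_rate": 0.7, "nodes": 64},
        }

        dataset_MB = 50_000            # sinogram data to move to the compute site
        work_units = 4_000             # slice reconstructions, assumed perfectly parallel

        def estimate(site):
            s = sites[site]
            transfer = dataset_MB / s["bandwidth_MBps"]                 # stage (i)
            compute = work_units / (s["node_rate"] * s["nodes"])        # stage (iii)
            return transfer + s["queue_wait_s"] + compute               # plus stage (ii)

        for site in sites:
            print(f"{site}: estimated completion {estimate(site):.0f} s")
        print("selected resource:", min(sites, key=estimate))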

  18. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  19. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  20. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
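
    The record above describes wrapping heterogeneous modeling components as Web Services and passing results downstream through the chain. A minimal sketch of that pattern follows; the endpoint URLs, payload keys and component ordering are invented for illustration and do not correspond to the actual OSSE services.

```python
import json
import urllib.request

# Hypothetical service endpoints, one per wrapped modeling component.
SERVICE_CHAIN = [
    "http://example.org/osse/surface_albedo",
    "http://example.org/osse/radiative_transfer",
    "http://example.org/osse/detection",
    "http://example.org/osse/retrieval",
]

def call_service(url: str, payload: dict) -> dict:
    """POST a JSON payload to one workflow component and return its JSON result."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run_chain(initial_parameters: dict) -> dict:
    """Pass each component's output downstream as the next component's input."""
    result = initial_parameters
    for url in SERVICE_CHAIN:
        result = call_service(url, result)
    return result
```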

  1. Flexible Workflow Software enables the Management of an Increased Volume and Heterogeneity of Sensors, and evolves with the Expansion of Complex Ocean Observatory Infrastructures.

    NASA Astrophysics Data System (ADS)

    Tomlin, M. C.; Jenkyns, R.

    2015-12-01

    Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.

  2. Rethinking Clinical Workflow.

    PubMed

    Schlesinger, Joseph J; Burdick, Kendall; Baum, Sarah; Bellomy, Melissa; Mueller, Dorothee; MacDonald, Alistair; Chern, Alex; Chrouser, Kristin; Burger, Christie

    2018-03-01

    The concept of clinical workflow borrows from management and leadership principles outside of medicine. The only way to rethink clinical workflow is to understand the neuroscience principles that underlie attention and vigilance. With any implementation to improve practice, there are human factors that can promote or impede progress. Modulating the environment and working as a team to take care of patients is paramount. Clinicians must continually rethink clinical workflow, evaluate progress, and understand that other industries have something to offer. Then, novel approaches can be implemented to take the best care of patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology.

    PubMed

    Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias

    2011-03-21

    Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.

  4. Managing and Communicating Operational Workflow

    PubMed Central

    Weinberg, Stuart T.; Danciu, Ioana; Unertl, Kim M.

    2016-01-01

    Background: Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. Objective: To describe and discuss the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). Methods: The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. Results: The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. Conclusions: The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings. PMID:27081407

  5. Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores

    NASA Astrophysics Data System (ADS)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25 000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.
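
    The task-categorization idea described above can be sketched as follows: each category carries its own resource specification, a cap on concurrently running jobs, and a time limit, and the specification can be adjusted while a project runs. The category names and numbers are hypothetical and this does not reproduce Lobster's actual configuration syntax.

```python
from dataclasses import dataclass

@dataclass
class TaskCategory:
    name: str
    cores: int
    memory_mb: int
    disk_mb: int
    max_running: int   # cap on simultaneously running jobs in this category
    time_limit_s: int  # voluntary termination limit, so partial work can be recovered

    def update(self, **changes):
        """Adjust resource specifications while the project is running."""
        for key, value in changes.items():
            setattr(self, key, value)

categories = {
    "simulation": TaskCategory("simulation", cores=1, memory_mb=2000, disk_mb=4000,
                               max_running=5000, time_limit_s=6 * 3600),
    "merge": TaskCategory("merge", cores=1, memory_mb=1000, disk_mb=8000,
                          max_running=200, time_limit_s=2 * 3600),
}

# Revise an estimate mid-project instead of restarting the whole workflow.
categories["simulation"].update(memory_mb=3000)
```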

  6. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated against 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Barriers to effective, safe communication and workflow between nurses and non-consultant hospital doctors during out-of-hours.

    PubMed

    Brady, Anne-Marie; Byrne, Gobnait; Quirke, Mary Brigid; Lynch, Aine; Ennis, Shauna; Bhangu, Jaspreet; Prendergast, Meabh

    2017-11-01

    This study aimed to evaluate the nature and type of communication and workflow arrangements between nurses and doctors out-of-hours (OOH). Effective communication and workflow arrangements between nurses and doctors are essential to minimize risk in hospital settings, particularly in the out-of-hours period. Timely patient flow is a priority for all healthcare organizations and the quality of communication and workflow arrangements influences patient safety. Qualitative descriptive design and data collection methods included focus groups and individual interviews. A 500 bed tertiary referral acute hospital in Ireland. Junior and senior Non-Consultant Hospital Doctors, staff nurses and nurse managers. Both nurses and doctors acknowledged the importance of good interdisciplinary communication and collaborative working, in sustaining effective workflow and enabling a supportive working environment and patient safety. Indeed, issues of safety and missed care OOH were found to be primarily due to difficulties of communication and workflow. Medical workflow OOH is often dependent on cues and communication to/from nursing. However, communication systems and, in particular the bleep system, considered central to the process of communication between doctors and nurses OOH, can contribute to workflow challenges and increased staff stress. It was reported as commonplace for routine work that should be completed during normal hours to fall into OOH when resources were most limited, further compounding risk to patient safety. Enhancement of communication strategies between nurses and doctors has the potential to remove barriers to effective decision-making and patient flow. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  8. Process improvement for the safe delivery of multidisciplinary-executed treatments-A case in Y-90 microspheres therapy.

    PubMed

    Cai, Bin; Altman, Michael B; Garcia-Ramirez, Jose; LaBrash, Jason; Goddu, S Murty; Mutic, Sasa; Parikh, Parag J; Olsen, Jeffrey R; Saad, Nael; Zoberi, Jacqueline E

    To develop a safe and robust workflow for yttrium-90 (Y-90) radioembolization procedures in a multidisciplinary team environment. A generalized Define-Measure-Analyze-Improve-Control (DMAIC)-based approach to process improvement was applied to a Y-90 radioembolization workflow. In the first DMAIC cycle, events with the Y-90 workflow were defined and analyzed. To improve the workflow, a web-based interactive electronic white board (EWB) system was adopted as the central communication platform and information processing hub. The EWB-based Y-90 workflow then underwent a second DMAIC cycle. Out of 245 treatments, three misses that went undetected until treatment initiation were recorded over a period of 21 months, and root-cause-analysis was performed to determine causes of each incident and opportunities for improvement. The EWB-based Y-90 process was further improved via new rules to define reliable sources of information as inputs into the planning process, as well as new check points to ensure this information was communicated correctly throughout the process flow. After implementation of the revised EWB-based Y-90 workflow, after two DMAIC-like cycles, there were zero misses out of 153 patient treatments in 1 year. The DMAIC-based approach adopted here allowed the iterative development of a robust workflow to achieve an adaptable, event-minimizing planning process despite a complex setting which requires the participation of multiple teams for Y-90 microspheres therapy. Implementation of such a workflow using the EWB or similar platform with a DMAIC-based process improvement approach could be expanded to other treatment procedures, especially those requiring multidisciplinary management. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  9. Knowledge Extraction and Semantic Annotation of Text from the Encyclopedia of Life

    PubMed Central

    Thessen, Anne E.; Parr, Cynthia Sims

    2014-01-01

    Numerous digitization and ontological initiatives have focused on translating biological knowledge from narrative text to machine-readable formats. In this paper, we describe two workflows for knowledge extraction and semantic annotation of text data objects featured in an online biodiversity aggregator, the Encyclopedia of Life. One workflow tags text with DBpedia URIs based on keywords. Another workflow finds taxon names in text using GNRD for the purpose of building a species association network. Both workflows work well: the annotation workflow has an F1 Score of 0.941 and the association algorithm has an F1 Score of 0.885. Existing text annotators such as Terminizer and DBpedia Spotlight performed well, but require some optimization to be useful in the ecology and evolution domain. Important future work includes scaling up and improving accuracy through the use of distributional semantics. PMID:24594988
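
    For reference, the F1 scores quoted above combine precision and recall. A minimal helper that computes an F1 score from true-positive, false-positive and false-negative counts might look like the following sketch (the example counts are illustrative only, not figures from the paper).

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: 90 correct annotations, 5 spurious, 6 missed -> F1 ≈ 0.942
print(round(f1_score(90, 5, 6), 3))
```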

  10. A framework for service enterprise workflow simulation with multi-agents cooperation

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun

    2013-11-01

    Dynamic process modelling for service business is a key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach used to analyze service business processes dynamically. The generic method for service business workflow simulation is based on discrete event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service workflow-oriented framework for the process simulation of service businesses using multi-agent cooperation to address the above issues. Social rationality of agents is introduced into the proposed framework. By adopting rationality as a social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
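
    As a rough illustration of the scheduling idea in this record, the sketch below lets resource agents bid for an activity instance and weights each bid by a social-rationality factor. The agent names, attributes and the scoring formula are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ResourceAgent:
    name: str
    queue_length: int   # activity instances already assigned to this agent
    skill: float        # fitness for the activity type, between 0 and 1
    rationality: float  # social rationality factor, between 0 and 1

    def bid(self) -> float:
        """Self-interested utility drops as the queue grows; a socially rational
        agent still offers its skill to keep the overall process flowing."""
        individual_utility = self.skill / (1 + self.queue_length)
        return (1 - self.rationality) * individual_utility + self.rationality * self.skill

def assign_activity_instance(agents):
    """Give the next activity instance to the agent with the highest weighted bid."""
    winner = max(agents, key=lambda a: a.bid())
    winner.queue_length += 1
    return winner

team = [ResourceAgent("clerk", queue_length=2, skill=0.6, rationality=0.9),
        ResourceAgent("expert", queue_length=5, skill=0.9, rationality=0.2)]
print(assign_activity_instance(team).name)
```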

  11. Combining high-throughput MALDI-TOF mass spectrometry and isoelectric focusing gel electrophoresis for virtual 2D gel-based proteomics.

    PubMed

    Lohnes, Karen; Quebbemann, Neil R; Liu, Kate; Kobzeff, Fred; Loo, Joseph A; Ogorzalek Loo, Rachel R

    2016-07-15

    The virtual two-dimensional gel electrophoresis/mass spectrometry (virtual 2D gel/MS) technology combines the premier, high-resolution capabilities of 2D gel electrophoresis with the sensitivity and high mass accuracy of mass spectrometry (MS). Intact proteins separated by isoelectric focusing (IEF) gel electrophoresis are imaged from immobilized pH gradient (IPG) polyacrylamide gels (the first dimension of classic 2D-PAGE) by matrix-assisted laser desorption/ionization (MALDI) MS. Obtaining accurate intact masses from sub-picomole-level proteins embedded in 2D-PAGE gels or in IPG strips is desirable to elucidate how the protein of one spot identified as protein 'A' on a 2D gel differs from the protein of another spot identified as the same protein, whenever tryptic peptide maps fail to resolve the issue. This task, however, has been extremely challenging. Virtual 2D gel/MS provides access to these intact masses. Modifications to our matrix deposition procedure improve the reliability with which IPG gels can be prepared; the new procedure is described. Development of this MALDI MS imaging (MSI) method for high-throughput MS with integrated 'top-down' MS to elucidate protein isoforms from complex biological samples is described and it is demonstrated that a 4-cm IPG gel segment can now be imaged in approximately 5 min. Gel-wide chemical and enzymatic methods with further interrogation by MALDI MS/MS provide identifications, sequence-related information, and post-translational/transcriptional modification information. The MSI-based virtual 2D gel/MS platform may potentially link the benefits of 'top-down' and 'bottom-up' proteomics. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Alteration of plasma membrane-bound redox systems of iron deficient pea roots by chitosan.

    PubMed

    Meisrimler, Claudia-Nicole; Planchon, Sebastien; Renaut, Jenny; Sergeant, Kjell; Lüthje, Sabine

    2011-08-12

    Iron is essential for all living organisms and plays a crucial role in pathogenicity. This study presents the first proteome analysis of plasma membranes isolated from pea roots. Protein profiles of four different samples (+Fe, +Fe/Chitosan, -Fe, and -Fe/Chitosan) were compared by native IEF-PAGE combined with in-gel activity stains and DIGE. Using DIGE, 89 proteins of interest were detected in plasma membrane fractions. Data revealed a differential abundance of several spots in all samples investigated. In comparison to the control and -FeCh the abundance of six protein spots increased whereas 56 spots decreased in +FeCh. Altered protein spots were analyzed by MALDI-TOF-TOF mass spectrometry. Besides stress-related proteins, transport proteins and redox enzymes were identified. Activity stains after native PAGE and spectrophotometric measurements demonstrated induction of a ferric-chelate reductase (-Fe) and a putative respiratory burst oxidase homolog (-FeCh). However, the activity of the ferric-chelate reductase decreased in -Fe plants after elicitor treatment. The activity of plasma membrane-bound class III peroxidases increased after elicitor treatment and decreased under iron-deficiency, whereas activity of quinone reductases decreased mostly after elicitor treatment. Possible functions of proteins identified and reasons for a weakened pathogen response of iron-deficient plants were discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Methods for investigating biosurfactants and bioemulsifiers: a review.

    PubMed

    Satpute, Surekha K; Banpurkar, Arun G; Dhakephalkar, Prashant K; Banat, Ibrahim M; Chopade, Balu A

    2010-06-01

    Microorganisms produce biosurfactants (BS)/bioemulsifiers (BE) with wide structural and functional diversity, which consequently results in the adoption of different techniques to investigate these diverse amphiphilic molecules. This review aims to compile information on different microbial screening methods, surface active product extraction procedures, and analytical terminologies used in this field. Different methods for screening microbial culture broth or cell biomass for surface active compound production are also presented and their possible advantages and disadvantages highlighted. In addition, the most common methods for purification, detection, and structure determination for a wide range of BS and BE are introduced. Simple techniques such as precipitation using acetone or ammonium sulphate, solvent extraction, ultrafiltration, ion exchange, dialysis, lyophilization, isoelectric focusing (IEF), and thin layer chromatography (TLC) are described. Other more elaborate techniques including high pressure liquid chromatography (HPLC), infra red (IR), gas chromatography-mass spectroscopy (GC-MS), nuclear magnetic resonance (NMR), fast atom bombardment mass spectroscopy (FAB-MS), protein digestion and amino acid sequencing are also elucidated. Various experimental strategies including static light scattering and hydrodynamic characterization for micelles have been discussed. A combination of various analytical methods is often essential in this area of research, and a number of trial-and-error attempts are needed to isolate, purify and characterize various surface active agents. This review introduces the various methodologies that are indispensable for studying biosurfactants and bioemulsifiers.

  14. Surface modified capillary electrophoresis combined with in solution isoelectric focusing and MALDI-TOF/TOF MS: a gel-free multidimensional electrophoresis approach for proteomic profiling--exemplified on human follicular fluid.

    PubMed

    Hanrieder, Jörg; Zuberovic, Aida; Bergquist, Jonas

    2009-04-24

    Development of miniaturized analytical tools continues to be of great interest to face the challenges in proteomic analysis of complex biological samples such as human body fluids. In the light of these challenges, special emphasis is put on the speed and simplicity of newly designed technological approaches as well as the need for cost efficiency and low sample consumption. In this study, we present an alternative multidimensional bottom-up approach for proteomic profiling for fast, efficient and sensitive protein analysis in complex biological matrices. The presented setup was based on sample pre-fractionation using microscale in solution isoelectric focusing (IEF) followed by tryptic digestion and subsequent capillary electrophoresis (CE) coupled off-line to matrix assisted laser desorption/ionization time of flight tandem mass spectrometry (MALDI TOF MS/MS). For high performance CE-separation, PolyE-323 modified capillaries were applied to minimize analyte-wall interactions. The potential of the analytical setup was demonstrated on human follicular fluid (hFF) representing a typical complex human body fluid with clinical implication. The obtained results show significant identification of 73 unique proteins (identified at 95% significance level), including mostly acute phase proteins but also protein identities that are well known to be extensively involved in follicular development.

  15. Identification of ovarian cancer-associated proteins in symptomatic women: A novel method for semi-quantitative plasma proteomics.

    PubMed

    Shield-Artin, Kristy L; Bailey, Mark J; Oliva, Karen; Liovic, Ana K; Barker, Gillian; Dellios, Nicole L; Reisman, Simone; Ayhan, Mustafa; Rice, Gregory E

    2012-04-01

    To evaluate the utility of an enhanced biomarker discovery approach in order to identify potential biomarkers relevant to ovarian cancer detection. We combined immuno-depletion, liquid-phase IEF, 1D-DIGE, MALDI-TOF/MS and LC-MS/MS to identify differentially expressed proteins in the plasma of symptomatic ovarian cancer patients, stratified by stage, compared to samples obtained from normal subjects. We demonstrate that this approach is a practical alternative to traditional 2D gel techniques and that it has some advantages, most notably increased protein capacity. Proteins were identified in all 76 bands excised from the gels in this project and confirmed the cancer-associated expression of several well-established biomarkers of ovarian cancer. These included C-reactive protein (CRP), haptoglobin, alpha-2 macroglobulin and A1A2. We also identified new ovarian cancer candidate biomarkers, Protein S100-A9 (S100A9) and multimerin-2. The cancer-associated differential expression of CRP and S100A9 was further confirmed by Western blot and ELISA. The methods developed in this study allow for the increased loading of plasma proteins into the analytical stream when compared to traditional 2D-DIGE. This increased protein identification sensitivity allowed us to identify new putative ovarian cancer biomarkers. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
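
    The Monte Carlo structure described above (input ensemble, deterministic hydrologic model, stochastic hydrologic uncertainty processor, with auxiliary randomization producing several members per model run) can be sketched as follows. The model functions, distributions and numbers are simple stand-ins, not the actual EBFS/EBFSR components.

```python
import random

def input_ensemble_forecaster(n_members: int) -> list:
    """Generate an ensemble of input time series (e.g., precipitation amounts)."""
    return [[max(0.0, random.gauss(5.0, 2.0)) for _ in range(24)] for _ in range(n_members)]

def hydrologic_model(inputs: list) -> list:
    """Deterministic stand-in: transform an input series into an output series."""
    state, outputs = 0.0, []
    for x in inputs:
        state = 0.9 * state + 0.1 * x
        outputs.append(state)
    return outputs

def hydrologic_uncertainty_processor(outputs: list, members_per_run: int) -> list:
    """Stochastic stand-in for the HUP: several predictand series per model run."""
    return [[y + random.gauss(0.0, 0.5) for y in outputs] for _ in range(members_per_run)]

def ebfsr_forecast(n_inputs: int = 50, members_per_run: int = 10) -> list:
    """Auxiliary randomization: 50 model runs x 10 HUP draws = 500-member forecast."""
    ensemble = []
    for inputs in input_ensemble_forecaster(n_inputs):
        outputs = hydrologic_model(inputs)
        ensemble.extend(hydrologic_uncertainty_processor(outputs, members_per_run))
    return ensemble

print(len(ebfsr_forecast()))  # 500 predictand time series from only 50 model runs
```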

  17. Reconfigurable Software for Mission Operations

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2014-01-01

    We developed software that provides flexibility to mission organizations through modularity and composability. Modularity enables removal and addition of functionality through the installation of plug-ins. Composability enables users to assemble software from pre-built reusable objects, thus reducing or eliminating the walls associated with traditional application architectures and enabling unique combinations of functionality. We have used composable objects to reduce display build time, create workflows, and build scenarios to test concepts for lunar roving operations. The software is open source, and may be downloaded from https://github.com/nasa/mct.

  18. A Six‐Stage Workflow for Robust Application of Systems Pharmacology

    PubMed Central

    Gadkar, K; Kirouac, DC; Mager, DE; van der Graaf, PH

    2016-01-01

    Quantitative and systems pharmacology (QSP) is increasingly being applied in pharmaceutical research and development. One factor critical to the ultimate success of QSP is the establishment of commonly accepted language, technical criteria, and workflows. We propose an integrated workflow that bridges conceptual objectives with underlying technical detail to support the execution, communication, and evaluation of QSP projects. PMID:27299936

  19. Using Workflow Diagrams to Address Hand Hygiene in Pediatric Long-Term Care Facilities

    PubMed Central

    Carter, Eileen J.; Cohen, Bevin; Murray, Meghan T.; Saiman, Lisa; Larson, Elaine L.

    2015-01-01

    Hand hygiene (HH) in pediatric long-term care settings has been found to be sub-optimal. Multidisciplinary teams at three pediatric long-term care facilities developed step-by-step workflow diagrams of commonly performed tasks highlighting HH opportunities. Diagrams were validated through observation of tasks and concurrent diagram assessment. Facility teams developed six workflow diagrams that underwent 22 validation observations. Four main themes emerged: 1) diagram specificity, 2) wording and layout, 3) timing of HH indications, and 4) environmental hygiene. The development of workflow diagrams is an opportunity to identify and address the complexity of HH in pediatric long-term care facilities. PMID:25773517

  20. Radiography for intensive care: participatory process analysis in a PACS-equipped and film/screen environment

    NASA Astrophysics Data System (ADS)

    Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes

    2002-05-01

    If new technology is introduced into medical practice it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interview of involved staff, structural analysis and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting with simultaneous image access for ICU physicians and radiologists, archiving of images, as well as image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care' from ordering of an image to provision of the final product (image + report). Interference of electronic workflow with analogue process steps such as paper-based ordering reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.

  1. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    The coming deluge of genome data presents significant challenges for storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  2. Development of the workflow kine systems for support on KAIZEN.

    PubMed

    Mizuno, Yuki; Ito, Toshihiko; Yoshikawa, Toru; Yomogida, Satoshi; Morio, Koji; Sakai, Kazuhiro

    2012-01-01

    In this paper, we introduce a new workflow line system consisting of location and image recording, which enables the acquisition of workflow information and its display for analysis. From the results of a workflow line investigation, we considered the anticipated effects and the problems for KAIZEN. Workflow line information included location information and action content information. These technologies suggest viewpoints that help improvement, for example, elimination of useless movement, redesign of layout, and review of work procedures. In a manufacturing factory, it was clear that there was much movement away from the standard operation place and accumulated residence time. As a concrete result of this investigation, a more efficient layout was suggested by this system. In the case of the hospital, similarly, it was pointed out that the workflow has problems of layout and setup operations, based on the effective movement patterns of experts. This system could adapt to routine as well as non-routine work. Through the development of this system, which can fit and adapt to industrial diversification, more effective "visual management" (visualization of work) is expected in the future.

  3. [Integration of the radiotherapy irradiation planning in the digital workflow].

    PubMed

    Röhner, F; Schmucker, M; Henne, K; Momm, F; Bruggmoser, G; Grosu, A-L; Frommhold, H; Heinemann, F E

    2013-02-01

    At the Clinic of Radiotherapy at the University Hospital Freiburg, all relevant workflows are paperless. After implementing the Operating Schedule System (OSS) as a framework, all processes are being implemented into the departmental system MOSAIQ. Designing a digital workflow for radiotherapy irradiation planning is a major challenge; it requires interdisciplinary expertise, and therefore the interfaces between the professions also have to be interdisciplinary. For every single step of radiotherapy irradiation planning, distinct responsibilities have to be defined and documented. All aspects of digital storage, backup and long-term availability of data were considered and have already been realized during the OSS project. After an analysis of the complete workflow and the statutory requirements, a detailed project plan was designed. In an interdisciplinary workgroup, problems were discussed and a detailed flowchart was developed. The new functionalities were implemented in a testing environment by the Clinical and Administrative IT Department (CAI). After extensive tests they were integrated into the new modular department system. The Clinic of Radiotherapy succeeded in realizing a completely digital workflow for radiotherapy irradiation planning. During the testing phase, our digital workflow was examined and afterwards was approved by the responsible authority.

  4. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    The coming deluge of genome data presents significant challenges for storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
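
    A schematic of the four-stage toolchain described above (Specification, Compilation, Part Assignment, Assembly) might be expressed as a simple function pipeline. The stage functions, data structures and part names below are placeholders and do not reflect the actual tool interfaces.

```python
def compile_to_agrn(program_source: str) -> dict:
    """Compilation: translate a Proto program into an abstract genetic regulatory network."""
    return {"motifs": ["inverter", "and_gate"], "source": program_source}  # placeholder

def assign_parts(agrn: dict, part_database: dict) -> list:
    """Part assignment: map each abstract network element to a concrete DNA part."""
    return [part_database.get(motif, "unknown_part") for motif in agrn["motifs"]]

def plan_assembly(parts: list) -> list:
    """Assembly: compute an ordered protocol for building the construct from part samples."""
    return [f"assemble {part}" for part in parts]

def run_toolchain(program_source: str, part_database: dict) -> list:
    """Specification -> Compilation -> Part Assignment -> Assembly."""
    agrn = compile_to_agrn(program_source)
    parts = assign_parts(agrn, part_database)
    return plan_assembly(parts)

# Hypothetical specification and part library, for illustration only.
protocol = run_toolchain("(green (and aTc IPTG))",
                         {"inverter": "inverter_part_01", "and_gate": "and_gate_part_02"})
print(protocol)
```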

  6. Ab initio chemical safety assessment: A workflow based on exposure considerations and non-animal methods.

    PubMed

    Berggren, Elisabet; White, Andrew; Ouedraogo, Gladys; Paini, Alicia; Richarz, Andrea-Nicole; Bois, Frederic Y; Exner, Thomas; Leite, Sofia; Grunsven, Leo A van; Worth, Andrew; Mahony, Catherine

    2017-11-01

    We describe and illustrate a workflow for chemical safety assessment that completely avoids animal testing. The workflow, which was developed within the SEURAT-1 initiative, is designed to be applicable to cosmetic ingredients as well as to other types of chemicals, e.g. active ingredients in plant protection products, biocides or pharmaceuticals. The aim of this work was to develop a workflow to assess chemical safety without relying on any animal testing, but instead by constructing a hypothesis based on existing data, in silico modelling and biokinetic considerations, and then by targeted non-animal testing. For illustrative purposes, we consider a hypothetical new ingredient x as a new component in a body lotion formulation. The workflow is divided into tiers in which points of departure are established through in vitro testing and in silico prediction, as the basis for estimating a safe external dose in a repeated use scenario. The workflow includes a series of possible exit (decision) points, with increasing levels of confidence, based on the sequential application of the Threshold of Toxicological Concern (TTC) approach and read-across, followed by an "ab initio" assessment, in which chemical safety is determined entirely by new in vitro testing and in vitro to in vivo extrapolation by means of mathematical modelling. We believe that this workflow could be applied as a tool to inform targeted and toxicologically relevant in vitro testing, where necessary, and to gain confidence in safety decision making without the need for animal testing.
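
    The tiered exit points described above (TTC, read-across, then a fully ab initio assessment) can be caricatured as a simple decision cascade. The threshold values, margin, function names and example numbers are hypothetical and are not drawn from the SEURAT-1 case study.

```python
def assess_chemical_safety(exposure_mg_per_kg_day: float,
                           ttc_threshold: float,
                           analogue_pod: float = None,
                           in_vitro_pod: float = None,
                           margin: float = 100.0) -> str:
    """Walk through tiered exit points with increasing levels of confidence."""
    # Tier 1: Threshold of Toxicological Concern (TTC) -- exit if exposure is low enough.
    if exposure_mg_per_kg_day < ttc_threshold:
        return "safe: exposure below TTC"
    # Tier 2: read-across -- use a point of departure (PoD) from a data-rich analogue.
    if analogue_pod is not None and analogue_pod / exposure_mg_per_kg_day >= margin:
        return "safe: sufficient margin from read-across point of departure"
    # Tier 3: ab initio -- new in vitro testing plus in vitro-to-in vivo extrapolation.
    if in_vitro_pod is not None and in_vitro_pod / exposure_mg_per_kg_day >= margin:
        return "safe: sufficient margin from ab initio assessment"
    return "not established: refine exposure estimate or generate further data"

# Hypothetical ingredient: exposure above TTC, but a read-across PoD gives margin >= 100.
print(assess_chemical_safety(0.005, ttc_threshold=0.0025, analogue_pod=1.0))
```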

  7. Correlative Microscopy of Vitreous Sections Provides Insights into BAR-Domain Organization In Situ.

    PubMed

    Bharat, Tanmay A M; Hoffmann, Patrick C; Kukulski, Wanda

    2018-04-10

    Electron microscopy imaging of macromolecular complexes in their native cellular context is limited by the inherent difficulty to acquire high-resolution tomographic data from thick cells and to specifically identify elusive structures within crowded cellular environments. Here, we combined cryo-fluorescence microscopy with electron cryo-tomography of vitreous sections into a coherent correlative microscopy workflow, ideal for detection and structural analysis of elusive protein assemblies in situ. We used this workflow to address an open question on BAR-domain coating of yeast plasma membrane compartments known as eisosomes. BAR domains can sense or induce membrane curvature, and form scaffold-like membrane coats in vitro. Our results demonstrate that in cells, the BAR protein Pil1 localizes to eisosomes of varying membrane curvature. Sub-tomogram analysis revealed a dense protein coat on curved eisosomes, which was not present on shallow eisosomes, indicating that while BAR domains can assemble at shallow membranes in vivo, scaffold formation is tightly coupled to curvature generation. Copyright © 2018 MRC Laboratory of Molecular Biology. Published by Elsevier Ltd. All rights reserved.

  8. The impact of e-prescribing on prescriber and staff time in ambulatory care clinics: a time motion study.

    PubMed

    Hollingworth, William; Devine, Emily Beth; Hansen, Ryan N; Lawless, Nathan M; Comstock, Bryan A; Wilson-Norton, Jennifer L; Tharp, Kathleen L; Sullivan, Sean D

    2007-01-01

    Electronic prescribing has improved the quality and safety of care. One barrier preventing widespread adoption is the potential detrimental impact on workflow. We used time-motion techniques to compare prescribing times at three ambulatory care sites that used paper-based prescribing, desktop, or laptop e-prescribing. An observer timed all prescriber (n = 27) and staff (n = 42) tasks performed during a 4-hour period. At the sites with optional e-prescribing >75% of prescription-related events were performed electronically. Prescribers at e-prescribing sites spent less time writing, but time-savings were offset by increased computer tasks. After adjusting for site, prescriber and prescription type, e-prescribing tasks took marginally longer than hand written prescriptions (12.0 seconds; -1.6, 25.6 CI). Nursing staff at the e-prescribing sites spent longer on computer tasks (5.4 minutes/hour; 0.0, 10.7 CI). E-prescribing was not associated with an increase in combined computer and writing time for prescribers. If carefully implemented, e-prescribing will not greatly disrupt workflow.

  9. Virtual planning for craniomaxillofacial surgery--7 years of experience.

    PubMed

    Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo

    2014-07-01

    Contemporary computer-assisted surgery systems increasingly allow for the virtual simulation of even complex surgical procedures, with ever more realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3d-models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result and soft tissue simulation proved to be helpful. In combination with classic 3d-models showing the underlying skeletal pathology, virtual simulation improved planning and transfer of craniomaxillofacial corrections. Additional work and expenses may be justified by increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. From days to hours: reporting clinically actionable variants from whole genome sequencing.

    PubMed

    Middha, Sumit; Baheti, Saurabh; Hart, Steven N; Kocher, Jean-Pierre A

    2014-01-01

    As the cost of whole genome sequencing (WGS) decreases, clinical laboratories will be looking at broadly adopting this technology to screen for variants of clinical significance. To fully leverage this technology in a clinical setting, results need to be reported quickly, as the turnaround rate could potentially impact patient care. The latest sequencers can sequence a whole human genome in about 24 hours. However, depending on the computing infrastructure available, the processing of data can take several days, with the majority of computing time devoted to aligning reads to genomic regions that are to date not clinically interpretable. In an attempt to accelerate the reporting of clinically actionable variants, we have investigated the utility of a multi-step alignment algorithm focused on aligning reads and calling variants in genomic regions of clinical relevance prior to processing the remaining reads on the whole genome. This iterative workflow significantly accelerates the reporting of clinically actionable variants with no loss of accuracy when compared to genotypes obtained with the OMNI SNP platform or to variants detected with a standard workflow that combines Novoalign and GATK.
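
    The multi-step idea above (call variants first in clinically actionable regions, then continue with the rest of the genome) can be sketched as a two-pass loop. The function names, region handling and priority labels are placeholders, not the authors' implementation, which combined Novoalign and GATK.

```python
def align_and_call(reads, regions, aligner, caller):
    """Placeholder: align reads restricted to the given regions and call variants there."""
    alignments = aligner(reads, regions)
    return caller(alignments, regions)

def tiered_variant_reporting(reads, actionable_regions, whole_genome_regions,
                             aligner, caller, report):
    """Report clinically actionable variants early, then finish the whole genome."""
    # Pass 1: only the genomic regions of established clinical relevance.
    actionable_variants = align_and_call(reads, actionable_regions, aligner, caller)
    report(actionable_variants, priority="urgent")
    # Pass 2: remaining regions, processed afterwards without delaying the first report.
    remaining = [r for r in whole_genome_regions if r not in actionable_regions]
    background_variants = align_and_call(reads, remaining, aligner, caller)
    report(background_variants, priority="routine")
```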

  11. Characterizing Phage Genomes for Therapeutic Applications

    PubMed Central

    Philipson, Casandra W.; Voegtly, Logan J.; Lueder, Matthew R.; Long, Kyle A.; Rice, Gregory K.; Frey, Kenneth G.; Biswas, Biswajit; Cer, Regina Z.; Hamilton, Theron; Bishop-Lilly, Kimberly A.

    2018-01-01

    Multi-drug resistance is increasing at alarming rates. The efficacy of phage therapy, treating bacterial infections with bacteriophages alone or in combination with traditional antibiotics, has been demonstrated in emergency cases in the United States and in other countries; however, it remains to be approved for widespread use in the US. One limiting factor is a lack of guidelines for assessing the genomic safety of phage candidates. We present the phage characterization workflow used by our team to generate data for submitting phages to the Food and Drug Administration (FDA) for authorized use. Essential analysis checkpoints and warnings are detailed for obtaining high-quality genomes, excluding undesirable candidates, rigorously assessing a phage genome for safety and evaluating sequencing contamination. This workflow has been developed in accordance with community standards for high-throughput sequencing of viral genomes as well as principles for ideal phages used for therapy. The feasibility and utility of the pipeline are demonstrated on two new phage genomes that meet all safety criteria. We propose these guidelines as a minimum standard for phages being submitted to the FDA for review as investigational new drug candidates. PMID:29642590

  12. A robust ambient temperature collection and stabilization strategy: Enabling worldwide functional studies of the human microbiome

    PubMed Central

    Anderson, Ericka L.; Li, Weizhong; Klitgord, Niels; Highlander, Sarah K.; Dayrit, Mark; Seguritan, Victor; Yooseph, Shibu; Biggs, William; Venter, J. Craig; Nelson, Karen E.; Jones, Marcus B.

    2016-01-01

    As reports on possible associations between microbes and the host increase in number, more meaningful interpretations of this information require an ability to compare data sets across studies. This is dependent upon standardization of workflows to ensure comparability both within and between studies. Here we propose the standard use of an alternate collection and stabilization method that would facilitate such comparisons. The DNA Genotek OMNIgene∙Gut Stool Microbiome Kit was compared to the currently accepted community standard of freezing to store human stool samples prior to whole genome sequencing (WGS) for microbiome studies. This stabilization and collection device allows for ambient temperature storage, automation, and ease of shipping/transfer of samples. The device permitted the same data reproducibility as with frozen samples, and yielded higher recovery of nucleic acids. Collection and stabilization of stool microbiome samples with the DNA Genotek collection device, combined with our extraction and WGS, provides a robust, reproducible workflow that enables standardized global collection, storage, and analysis of stool for microbiome studies. PMID:27558918

  13. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.

    PubMed

    Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.

  14. Physician activity during outpatient visits and subjective workload.

    PubMed

    Calvitti, Alan; Hochheiser, Harry; Ashfaq, Shazia; Bell, Kristin; Chen, Yunan; El Kareh, Robert; Gabuzda, Mark T; Liu, Lin; Mortensen, Sara; Pandey, Braj; Rick, Steven; Street, Richard L; Weibel, Nadir; Weir, Charlene; Agha, Zia

    2017-05-01

    We describe methods for capturing and analyzing EHR use and clinical workflow of physicians during outpatient encounters and relating activity to physicians' self-reported workload. We collected temporally-resolved activity data including audio, video, EHR activity, and eye-gaze along with post-visit assessments of workload. These data are then analyzed through a combination of manual content analysis and computational techniques to temporally align streams, providing a range of process measures of EHR usage, clinical workflow, and physician-patient communication. Data were collected from primary care and specialty clinics at the Veterans Administration San Diego Healthcare System and UCSD Health, which use the Electronic Health Record (EHR) platforms CPRS and Epic, respectively. Grouping visit activity by physician, site, specialty, and patient status enables rank-ordering activity factors by their correlation to physicians' subjective workload as captured by the NASA Task Load Index survey. We developed a coding scheme that enabled us to compare timing studies between CPRS and Epic and extract patient and visit complexity profiles. We identified similar patterns of EHR use and navigation at the 2 sites despite differences in functions, user interfaces and consequent coded representations. Both sites displayed similar proportions of EHR function use and navigation, and distribution of visit length, proportion of time physicians attended to EHRs (gaze), and subjective workload as measured by the task load survey. We found that visit activity was highly variable across individual physicians, and the observed activity metrics ranged widely as correlates to subjective workload. We discuss implications of our study for methodology, clinical workflow and EHR redesign. Copyright © 2017 Elsevier Inc. All rights reserved.
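    The rank-ordering step described above can be illustrated with a small pandas sketch. The column names and values below are placeholders invented for illustration, not the study's variables or data.

```python
"""Minimal sketch: rank per-visit activity metrics by their correlation with
subjective workload (NASA-TLX). All columns and values are toy placeholders."""
import pandas as pd

# One row per visit: activity metrics plus the post-visit TLX score.
visits = pd.DataFrame({
    "visit_length_min":  [18, 25, 12, 30, 22],
    "ehr_gaze_fraction": [0.45, 0.60, 0.30, 0.70, 0.55],
    "nav_events":        [40, 65, 22, 80, 58],
    "free_text_chars":   [300, 520, 150, 640, 410],
    "nasa_tlx":          [35, 62, 20, 75, 50],
})

# Correlate every activity metric with workload and rank by |r|.
corr = (visits.drop(columns="nasa_tlx")
              .corrwith(visits["nasa_tlx"])
              .sort_values(key=lambda s: s.abs(), ascending=False))
print(corr)
```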

  15. SciDAC-Data, A Project to Enabling Data Driven Modeling of Exascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mubarak, M.; Ding, P.; Aliaga, L.

    The SciDAC-Data project is a DOE funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab Data Center on the organization, movement, and consumption of High Energy Physics data. The project will analyze the analysis patterns and data organization that have been used by the NOvA, MicroBooNE, MINERvA and other experiments, to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership class exascale computing facilities. We will address the use of the SciDAC-Data distributions acquired from Fermilab Data Center’s analysis workflows and corresponding to around 71,000 HEP jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in HPC environments. In particular we describe in detail how the Sequential Access via Metadata (SAM) data handling system in combination with the dCache/Enstore based data archive facilities have been analyzed to develop the radically different models of the analysis of HEP data. We present how the simulation may be used to analyze the impact of design choices in archive facilities.

  16. Automated selected reaction monitoring data analysis workflow for large-scale targeted proteomic studies.

    PubMed

    Surinova, Silvia; Hüttenhain, Ruth; Chang, Ching-Yun; Espona, Lucia; Vitek, Olga; Aebersold, Ruedi

    2013-08-01

    Targeted proteomics based on selected reaction monitoring (SRM) mass spectrometry is commonly used for accurate and reproducible quantification of protein analytes in complex biological mixtures. Strictly hypothesis-driven, SRM assays quantify each targeted protein by collecting measurements on its peptide fragment ions, called transitions. To achieve sensitive and accurate quantitative results, experimental design and data analysis must consistently account for the variability of the quantified transitions. This consistency is especially important in large experiments, which increasingly require profiling up to hundreds of proteins over hundreds of samples. Here we describe a robust and automated workflow for the analysis of large quantitative SRM data sets that integrates data processing, statistical protein identification and quantification, and dissemination of the results. The integrated workflow combines three software tools: mProphet for peptide identification via probabilistic scoring; SRMstats for protein significance analysis with linear mixed-effect models; and PASSEL, a public repository for storage, retrieval and query of SRM data. The input requirements for the protocol are files with SRM traces in mzXML format, and a file with a list of transitions in a text tab-separated format. The protocol is especially suited for data with heavy isotope-labeled peptide internal standards. We demonstrate the protocol on a clinical data set in which the abundances of 35 biomarker candidates were profiled in 83 blood plasma samples of subjects with ovarian cancer or benign ovarian tumors. The time frame to realize the protocol is 1-2 weeks, depending on the number of replicates used in the experiment.

  17. The impact of using an intravenous workflow management system (IVWMS) on cost and patient safety.

    PubMed

    Lin, Alex C; Deng, Yihong; Thaibah, Hilal; Hingl, John; Penm, Jonathan; Ivey, Marianne F; Thomas, Mark

    2018-07-01

    The aim of this study was to determine the financial costs associated with wasted and missing doses before and after the implementation of an intravenous workflow management system (IVWMS) and to quantify the number and the rate of detected intravenous (IV) preparation errors. A retrospective analysis of the sample hospital information system database was conducted using three months of data before and after the implementation of an IVWMS (DoseEdge®), which uses barcode scanning and photographic technologies to track and verify each step of the preparation process. The financial impact associated with wasted and missing IV doses was determined by combining drug acquisition, labor, accessory, and disposal costs. The intercepted error reports and pharmacist detected error reports were drawn from the IVWMS to quantify the number of errors by defined error categories. The total numbers of IV doses prepared before and after the implementation of the IVWMS were 110,963 and 101,765 doses, respectively. The adoption of the IVWMS significantly reduced the amount of wasted and missing IV doses by 14,176 and 2268 doses, respectively (p < 0.001). The overall cost savings of using the system was $144,019 over 3 months. The total number of errors detected was 1160 (1.14%) after using the IVWMS. The implementation of the IVWMS facilitated workflow changes that led to a positive impact on cost and patient safety. The implementation of the IVWMS increased patient safety by enforcing standard operating procedures and bar code verifications. Published by Elsevier B.V.
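    The cost model named above (drug acquisition plus labor, accessory, and disposal costs per affected dose) is simple arithmetic; a minimal sketch follows. All parameter values are hypothetical placeholders, not figures from the study.

```python
"""Back-of-the-envelope sketch of a per-dose cost model combining acquisition,
labor, accessory, and disposal components. Parameter values are placeholders."""

def dose_cost(drug_acquisition, labor_minutes, labor_rate_per_hour,
              accessory_cost, disposal_cost):
    labor = labor_minutes / 60.0 * labor_rate_per_hour
    return drug_acquisition + labor + accessory_cost + disposal_cost

# Hypothetical example for one wasted dose.
per_dose = dose_cost(drug_acquisition=12.50, labor_minutes=6,
                     labor_rate_per_hour=55.0, accessory_cost=1.75,
                     disposal_cost=0.40)
wasted_doses_avoided = 14176   # count of avoided wasted doses reported in the abstract
print(f"per-dose cost ~ ${per_dose:.2f}; "
      f"avoided waste ~ ${per_dose * wasted_doses_avoided:,.0f}")
```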

  18. MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory

    PubMed Central

    Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang

    2005-01-01

    MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721

  19. A Novel Workflow to Enrich and Isolate Patient-Matched EpCAMhigh and EpCAMlow/negative CTCs Enables the Comparative Characterization of the PIK3CA Status in Metastatic Breast Cancer

    PubMed Central

    Lampignano, Rita; Yang, Liwen; Neumann, Martin H. D.; Franken, André; Fehm, Tanja; Niederacher, Dieter; Neubauer, Hans

    2017-01-01

    Circulating tumor cells (CTCs), potential precursors of most epithelial solid tumors, are mainly enriched by epithelial cell adhesion molecule (EpCAM)-dependent technologies. Hence, these approaches may overlook mesenchymal CTCs, considered highly malignant. Our aim was to establish a workflow to enrich and isolate patient-matched EpCAMhigh and EpCAMlow/negative CTCs within the same blood samples, and to investigate the phosphatidylinositol 3-kinase catalytic subunit alpha (PIK3CA) mutational status within single CTCs. We sequentially processed metastatic breast cancer (MBC) blood samples via CellSearch® (EpCAM-based) and via Parsortix™ (size-based) systems. After enrichment, cells captured in Parsortix™ cassettes were stained in situ for nuclei, cytokeratins, EpCAM and CD45. Afterwards, sorted cells were isolated via CellCelector™ micromanipulator and their genomes were amplified. Lastly, PIK3CA mutational status was analyzed by combining an amplicon-based approach with Sanger sequencing. In 54% of patients' blood samples both EpCAMhigh and EpCAMlow/negative cells were identified and successfully isolated. High genomic integrity was observed in 8% of amplified genomes of EpCAMlow/negative cells vs. 28% of EpCAMhigh cells, suggesting increased apoptosis in the first CTC-subpopulation. Furthermore, PIK3CA hotspot mutations were detected in both EpCAMhigh and EpCAMlow/negative CTCs. Our workflow is suitable for single CTC analysis, permitting—for the first time—assessment of the heterogeneity of PIK3CA mutational status within patient-matched EpCAMhigh and EpCAMlow/negative CTCs. PMID:28858218

  20. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    PubMed

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.

  1. Identifying Urinary and Serum Exosome Biomarkers for Radiation Exposure Using a Data Dependent Acquisition and SWATH-MS Combined Workflow.

    PubMed

    Kulkarni, Shilpa; Koller, Antonius; Mani, Kartik M; Wen, Ruofeng; Alfieri, Alan; Saha, Subhrajit; Wang, Jian; Patel, Purvi; Bandeira, Nuno; Guha, Chandan; Chen, Emily I

    2016-11-01

    Early and accurate assessment of radiation injury by radiation-responsive biomarkers is critical for triage and early intervention. Biofluids such as urine and serum are convenient for such analysis. Recent research has also suggested that exosomes are a reliable source of biomarkers in disease progression. In the present study, we analyzed total urine proteome and exosomes isolated from urine or serum for potential biomarkers of acute and persistent radiation injury in mice exposed to lethal whole body irradiation (WBI). For feasibility studies, the mice were irradiated at 10.4 Gy WBI, and urine and serum samples were collected 24 and 72 hours after irradiation. Exosomes were isolated and analyzed using liquid chromatography mass spectrometry/mass spectrometry-based workflow for radiation exposure signatures. A data dependent acquisition and SWATH-MS combined workflow approach was used to identify significantly altered exosome biomarkers indicative of acute or persistent radiation-induced responses. For the validation studies, mice were exposed to 3, 6, 8, or 10 Gy WBI, and samples were analyzed for comparison. A comparison between total urine proteomics and urine exosome proteomics demonstrated that exosome proteomic analysis was superior in identifying radiation signatures. Feasibility studies identified 23 biomarkers from urine and 24 biomarkers from serum exosomes after WBI. Urinary exosome signatures identified different physiological parameters than the ones obtained in serum exosomes. Exosome signatures from urine indicated injury to the liver, gastrointestinal, and genitourinary tracts. In contrast, serum showed vascular injuries and acute inflammation in response to radiation. Selected urinary exosomal biomarkers also showed changes at lower radiation doses in validation studies. Exosome proteomics revealed radiation- and time-dependent protein signatures after WBI. A total of 47 differentially secreted proteins were identified in urinary and serum exosomes. Together, these data showed the feasibility of defining biomarkers that could elucidate tissue-associated and systemic response caused by high-dose ionizing radiation. This is the first report using an exosome proteomics approach to identify radiation signatures. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. The analyzing, comparing, and visualizing of resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports the users to prove and disprove existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results. PMID:25915061

  3. Identifying Urinary and Serum Exosome Biomarkers for Radiation Exposure Using a Data Dependent Acquisition and SWATH-MS Combined Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulkarni, Shilpa; Koller, Antonius; Proteomics Shared Resource, Herbert Irving Comprehensive Cancer Center, New York, New York

    Purpose: Early and accurate assessment of radiation injury by radiation-responsive biomarkers is critical for triage and early intervention. Biofluids such as urine and serum are convenient for such analysis. Recent research has also suggested that exosomes are a reliable source of biomarkers in disease progression. In the present study, we analyzed total urine proteome and exosomes isolated from urine or serum for potential biomarkers of acute and persistent radiation injury in mice exposed to lethal whole body irradiation (WBI). Methods and Materials: For feasibility studies, the mice were irradiated at 10.4 Gy WBI, and urine and serum samples were collected 24 and 72 hours after irradiation. Exosomes were isolated and analyzed using liquid chromatography mass spectrometry/mass spectrometry-based workflow for radiation exposure signatures. A data dependent acquisition and SWATH-MS combined workflow approach was used to identify significantly altered exosome biomarkers indicative of acute or persistent radiation-induced responses. For the validation studies, mice were exposed to 3, 6, 8, or 10 Gy WBI, and samples were analyzed for comparison. Results: A comparison between total urine proteomics and urine exosome proteomics demonstrated that exosome proteomic analysis was superior in identifying radiation signatures. Feasibility studies identified 23 biomarkers from urine and 24 biomarkers from serum exosomes after WBI. Urinary exosome signatures identified different physiological parameters than the ones obtained in serum exosomes. Exosome signatures from urine indicated injury to the liver, gastrointestinal, and genitourinary tracts. In contrast, serum showed vascular injuries and acute inflammation in response to radiation. Selected urinary exosomal biomarkers also showed changes at lower radiation doses in validation studies. Conclusions: Exosome proteomics revealed radiation- and time-dependent protein signatures after WBI. A total of 47 differentially secreted proteins were identified in urinary and serum exosomes. Together, these data showed the feasibility of defining biomarkers that could elucidate tissue-associated and systemic response caused by high-dose ionizing radiation. This is the first report using an exosome proteomics approach to identify radiation signatures.

  4. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  5. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    PubMed

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB), ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects adding to 500+ soybean germplasm lines in total have been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. PGen workflow has been optimized for the most efficient analysis of soybean data using thorough testing and validation. This research serves as an example of best practices for development of genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. PGen workflow can also be easily customized for analysis of data in other species.

  6. Distributed Data Integration Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, T; Ludaescher, B; Vouk, M

    The Internet is becoming the preferred method for disseminating scientific data from a variety of disciplines. This can result in information overload on the part of the scientists, who are unable to query all of the relevant sources, even if they knew where to find them, what they contained, how to interact with them, and how to interpret the results. A related issue is that keeping up with current trends in information technology often taxes the end-user's expertise and time. Thus instead of benefiting from this information rich environment, scientists become experts on a small number of sources and technologies, use them almost exclusively, and develop a resistance to innovations that can enhance their productivity. Enabling information-based scientific advances, in domains such as functional genomics, requires fully utilizing all available information and the latest technologies. In order to address this problem we are developing an end-user-centric, domain-sensitive, workflow-based infrastructure, shown in Figure 1, that will allow scientists to design complex scientific workflows that reflect the data manipulation required to perform their research without an undue burden. We are taking a three-tiered approach to designing this infrastructure utilizing (1) abstract workflow definition, construction, and automatic deployment, (2) complex agent-based workflow execution and (3) automatic wrapper generation. In order to construct a workflow, the scientist defines an abstract workflow (AWF) in terminology (semantics and context) that is familiar to him/her. This AWF includes all of the data transformations, selections, and analyses required by the scientist, but does not necessarily specify particular data sources. This abstract workflow is then compiled into an executable workflow (EWF, in our case XPDL) that is then evaluated and executed by the workflow engine. This EWF contains references to specific data source and interfaces capable of performing the desired actions. In order to provide access to the largest number of resources possible, our lowest level utilizes automatic wrapper generation techniques to create information and data wrappers capable of interacting with the complex interfaces typical in scientific analysis. The remainder of this document outlines our work in these three areas, the impact our work has made, and our plans for the future.
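    The AWF-to-EWF compilation step can be illustrated with a toy registry lookup. The step names, wrappers, and endpoints below are invented placeholders and are not part of the described infrastructure, which compiles to XPDL rather than Python dictionaries.

```python
"""Toy illustration of compiling an abstract workflow (AWF), written in the
scientist's own terms, into an executable workflow (EWF) bound to concrete
data sources and wrappers. Registry contents and step names are invented."""

# Domain-level registry: abstract operation -> concrete resource/wrapper.
REGISTRY = {
    "fetch_sequences": {"tool": "genbank_wrapper", "endpoint": "https://example.org/genbank"},
    "align":           {"tool": "clustal_wrapper", "endpoint": "local://clustalo"},
    "summarize":       {"tool": "report_wrapper",  "endpoint": "local://report"},
}

def compile_awf(awf_steps):
    """Resolve each abstract step to an executable step; unknown steps fail early."""
    ewf = []
    for step in awf_steps:
        binding = REGISTRY.get(step["op"])
        if binding is None:
            raise ValueError(f"no wrapper registered for abstract step {step['op']!r}")
        ewf.append({**binding, "params": step.get("params", {})})
    return ewf

awf = [{"op": "fetch_sequences", "params": {"query": "BRCA1"}},
       {"op": "align"},
       {"op": "summarize"}]
for executable_step in compile_awf(awf):
    print(executable_step)
```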

  7. Metadata Management on the SCEC PetaSHA Project: Helping Users Describe, Discover, Understand, and Use Simulation Data in a Large-Scale Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Okaya, D.; Deelman, E.; Maechling, P.; Wong-Barnum, M.; Jordan, T. H.; Meyers, D.

    2007-12-01

    Large scientific collaborations, such as the SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project, involve interactions between many scientists who exchange ideas and research results. These groups must organize, manage, and make accessible their community materials of observational data, derivative (research) results, computational products, and community software. The integration of scientific workflows as a paradigm to solve complex computations provides advantages of efficiency, reliability, repeatability, choices, and ease of use. The underlying resource needed for a scientific workflow to function and create discoverable and exchangeable products is the construction, tracking, and preservation of metadata. In the scientific workflow environment there is a two-tier structure of metadata. Workflow-level metadata and provenance describe operational steps, identity of resources, execution status, and product locations and names. Domain-level metadata essentially define the scientific meaning of data, codes and products. To a large degree the metadata at these two levels are separate. However, between these two levels is a subset of metadata produced at one level but is needed by the other. This crossover metadata suggests that some commonality in metadata handling is needed. SCEC researchers are collaborating with computer scientists at SDSC, the USC Information Sciences Institute, and Carnegie Mellon Univ. in order to perform earthquake science using high-performance computational resources. A primary objective of the "PetaSHA" collaboration is to perform physics-based estimations of strong ground motion associated with real and hypothetical earthquakes located within Southern California. Construction of 3D earth models, earthquake representations, and numerical simulation of seismic waves are key components of these estimations. Scientific workflows are used to orchestrate the sequences of scientific tasks and to access distributed computational facilities such as the NSF TeraGrid. Different types of metadata are produced and captured within the scientific workflows. One workflow within PetaSHA ("Earthworks") performs a linear sequence of tasks with workflow and seismological metadata preserved. Downstream scientific codes ingest these metadata produced by upstream codes. The seismological metadata uses attribute-value pairing in plain text; an identified need is to use more advanced handling methods. Another workflow system within PetaSHA ("Cybershake") involves several complex workflows in order to perform statistical analysis of ground shaking due to thousands of hypothetical but plausible earthquakes. Metadata management has been challenging due to its construction around a number of legacy scientific codes. We describe difficulties arising in the scientific workflow due to the lack of this metadata and suggest corrective steps, which in some cases include the cultural shift of domain science programmers coding for metadata.

  8. The equivalency between logic Petri workflow nets and workflow nets.

    PubMed

    Wang, Jing; Yu, ShuXia; Du, YuYue

    2015-01-01

    Logic Petri nets (LPNs) can describe and analyze batch processing functions and passing value indeterminacy in cooperative systems. Logic Petri workflow nets (LPWNs) are proposed based on LPNs in this paper. Process mining is regarded as an important bridge between modeling and analysis of data mining and business process. Workflow nets (WF-nets) are the extension to Petri nets (PNs), and have successfully been used to process mining. Some shortcomings cannot be avoided in process mining, such as duplicate tasks, invisible tasks, and the noise of logs. The online shop in electronic commerce in this paper is modeled to prove the equivalence between LPWNs and WF-nets, and advantages of LPWNs are presented.
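    To make the workflow-net terminology concrete, the sketch below plays the ordinary Petri-net token game on a tiny online-shop flow. It is a generic illustration only and does not model the logic extensions (LPNs/LPWNs) introduced in this record; the place and transition names are invented.

```python
"""Minimal sketch of Petri-net firing semantics for a toy workflow net."""

# Net: place -> token count; transition -> (input places, output places).
marking = {"start": 1, "order_placed": 0, "paid": 0, "end": 0}
transitions = {
    "place_order": ({"start"}, {"order_placed"}),
    "pay":         ({"order_placed"}, {"paid"}),
    "ship":        ({"paid"}, {"end"}),
}

def enabled(t):
    ins, _ = transitions[t]
    return all(marking[p] > 0 for p in ins)

def fire(t):
    ins, outs = transitions[t]
    assert enabled(t), f"{t} is not enabled"
    for p in ins:
        marking[p] -= 1
    for p in outs:
        marking[p] += 1

for t in ["place_order", "pay", "ship"]:
    fire(t)
print(marking)   # the single token has moved from 'start' to 'end'
```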

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin

    The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.

  10. Ergonomic design for dental offices.

    PubMed

    Ahearn, David J; Sanders, Martha J; Turcotte, Claudia

    2010-01-01

    The increasing complexity of the dental office environment influences productivity and workflow for dental clinicians. Advances in technology, and with it the range of products needed to provide services, have led to sprawl in operatory setups and the potential for awkward postures for dental clinicians during the delivery of oral health services. Although ergonomics often addresses the prevention of musculoskeletal disorders for specific populations of workers, concepts of workflow and productivity are integral to improved practice in work environments. This article provides suggestions for improving workflow and productivity for dental clinicians. The article applies ergonomic principles to dental practice issues such as equipment and supply management, office design, and workflow management. Implications for improved ergonomic processes and future research are explored.

  11. The Equivalency between Logic Petri Workflow Nets and Workflow Nets

    PubMed Central

    Wang, Jing; Yu, ShuXia; Du, YuYue

    2015-01-01

    Logic Petri nets (LPNs) can describe and analyze batch processing functions and passing value indeterminacy in cooperative systems. Logic Petri workflow nets (LPWNs) are proposed based on LPNs in this paper. Process mining is regarded as an important bridge between modeling and analysis of data mining and business process. Workflow nets (WF-nets) are the extension to Petri nets (PNs), and have successfully been used to process mining. Some shortcomings cannot be avoided in process mining, such as duplicate tasks, invisible tasks, and the noise of logs. The online shop in electronic commerce in this paper is modeled to prove the equivalence between LPWNs and WF-nets, and advantages of LPWNs are presented. PMID:25821845

  12. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics including Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects to mature the system for production use.
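    The mapping-and-scheduling objective mentioned above (minimum end-to-end delay on heterogeneous resources) can be illustrated with a toy list scheduler. The DAG, task durations, and resource names below are invented placeholders; this is not the SWAMP scheduler.

```python
"""Toy list-scheduling sketch: each task in a small workflow DAG is placed on
the resource that yields the earliest finish time. All values are placeholders."""

tasks = {"stage_in": 2, "simulate": 8, "filter": 3, "visualize": 2}   # durations
deps = {"simulate": ["stage_in"], "filter": ["simulate"], "visualize": ["filter"]}
resources = {"cluster_a": 0.0, "cluster_b": 0.0}   # time each resource becomes free
finish = {}

# Process tasks in a (pre-computed) topological order of the DAG.
for task in ["stage_in", "simulate", "filter", "visualize"]:
    ready = max((finish[d] for d in deps.get(task, [])), default=0.0)
    best = min(resources, key=lambda r: max(resources[r], ready) + tasks[task])
    start = max(resources[best], ready)
    finish[task] = start + tasks[task]
    resources[best] = finish[task]
    print(f"{task:10s} -> {best} [{start:.0f}, {finish[task]:.0f}]")

print("end-to-end delay:", max(finish.values()))
```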

  13. Preparation and characterization of protein-loaded poly(epsilon-caprolactone) microparticles for oral vaccine delivery.

    PubMed

    Benoit, M A; Baras, B; Gillard, J

    1999-07-05

    This paper describes the conditions of preparation of poly(epsilon-caprolactone) (PCL) microparticles with a mean size between 5 and 10 microm, obtained by a double emulsion-solvent evaporation technique, suitable for oral vaccine delivery. Bovine serum albumin (BSA) was used as a water-soluble model antigen for encapsulation. Different parameters influencing the microparticle size, the BSA loading and the entrapment efficiency were investigated. Spherical, smooth and homogeneously distributed microparticles were produced with a BSA loading and entrapment efficiency reaching, respectively, 5% (w/w) and 30%. Polyacrylamide gel electrophoresis (PAGE) and isoelectric focusing (IEF) analyses of BSA released from these particles indicated that the entrapped protein remained essentially unaltered by the encapsulation process.

  14. Hierarchical Neural Network (HNN) for Closed Loop Decision Making: Designing the Architecture of a Hierarchical Neural Network to Model Attention, Learning and Goal Oriented Behavior

    DTIC Science & Technology

    1990-12-01

  15. Fighting detection using interaction energy force

    NASA Astrophysics Data System (ADS)

    Wateosot, Chonthisa; Suvonvorn, Nikom

    2017-02-01

    Fighting detection is an important security issue aimed at preventing criminal or undesirable events in public places. Many computer vision studies have addressed the detection of specific events in crowded scenes. In this paper we focus on fighting detection using a social-based Interaction Energy Force (IEF). The method uses low-level features without object extraction and tracking. The interaction force is modeled using the magnitude and direction of optical flows. A fighting factor is derived from this model to detect fighting events using a thresholding method. An energy map of the interaction force is also presented to identify the corresponding events. The evaluation is performed using the NUSHGA and BEHAVE datasets. The results show efficient detection with high accuracy under various conditions.
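    A hedged sketch of the general idea follows: derive an interaction-force-like measure from the magnitude and direction of dense optical flow and threshold it to flag candidate frames. This is a generic OpenCV illustration, not the authors' Interaction Energy Force formulation; the video path and threshold value are made-up placeholders.

```python
"""Generic sketch: optical-flow-based 'interaction energy' thresholding."""
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")          # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
THRESHOLD = 2.5                               # placeholder, to be calibrated

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow between consecutive frames (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Crude surrogate for interaction energy: strong flow in opposing directions.
    opposing = np.abs(np.sin(ang - ang.mean()))
    energy = float((mag * opposing).mean())
    if energy > THRESHOLD:
        print("possible fighting event, energy =", round(energy, 2))
    prev_gray = gray
cap.release()
```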

  16. Improvement of the solubilization of proteins in two-dimensional electrophoresis with immobilized pH gradients

    PubMed Central

    Rabilloud, Thierry; Adessi, C.; Giraudel, A.; Lunardi, J.

    2007-01-01

    Summary We have carried out the separation of sparingly soluble (membrane and nuclear) proteins by high-resolution two-dimensional electrophoresis. IEF with immobilized pH gradients leads to severe quantitative losses of proteins in the resulting 2-D map, although the resolution is usually kept high. We therefore tried to improve the solubility of proteins in this technique by using denaturing cocktails containing various detergents and chaotropes. Best results were obtained by using a denaturing solution containing urea, thiourea, and detergents (both nonionic and zwitterionic). The usefulness of thiourea-containing denaturing mixtures is shown in this article on several models, including microsomal and nuclear proteins and tubulin, a protein highly prone to aggregation. PMID:9150907

  17. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency; the use of multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
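    The DVFS trade-off underlying this approach can be sketched with the standard dynamic-power model P ~ C·V²·f: lowering voltage and frequency cuts energy per task while stretching its runtime. The operating points and workload figure below are hypothetical placeholders, not values from the study.

```python
"""Sketch of the DVFS energy/time trade-off using the common P ~ C*V^2*f model."""

# Hypothetical operating points (supply voltage in volts, frequency in GHz).
OPERATING_POINTS = [(1.2, 2.0), (1.0, 1.6), (0.8, 1.2)]
CAPACITANCE = 1.0          # effective switched capacitance, arbitrary units
WORK_CYCLES = 4.0e9        # cycles required by the task (placeholder)

for volts, ghz in OPERATING_POINTS:
    time_s = WORK_CYCLES / (ghz * 1e9)              # slower clock -> longer runtime
    power = CAPACITANCE * volts ** 2 * ghz * 1e9    # dynamic power model
    energy = power * time_s                         # lower voltage -> lower energy
    print(f"V={volts:.1f} V  f={ghz:.1f} GHz  time={time_s:.2f} s  energy={energy:.2e} (a.u.)")
```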

  18. Enabling Real-time Water Decision Support Services Using Model as a Service

    NASA Astrophysics Data System (ADS)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.
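    The "model as a service" pattern described here amounts to wrapping a model run behind an HTTP endpoint so it can be triggered from a browser. The sketch below uses Flask only as a generic example; the endpoint, parameters, and the placeholder routing function are hypothetical stand-ins, not the RAPID service itself.

```python
"""Minimal model-as-a-service sketch: an HTTP endpoint triggers a model run."""
from flask import Flask, jsonify, request

app = Flask(__name__)

def route_discharge(basin: str, inflow: float) -> dict:
    # Placeholder for the actual river-routing computation.
    return {"basin": basin, "peak_discharge_cms": round(inflow * 0.85, 2)}

@app.route("/run", methods=["GET"])
def run_model():
    basin = request.args.get("basin", "guadalupe")
    inflow = float(request.args.get("inflow", "100"))
    return jsonify(route_discharge(basin, inflow))

if __name__ == "__main__":
    app.run(port=8080)   # e.g. GET /run?basin=guadalupe&inflow=250
```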

  19. Process-Structure Linkages Using a Data Science Approach: Application to Simulated Additive Manufacturing Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi

    A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order model) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.
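    The dimensionality-reduction and linkage-extraction steps can be illustrated with a scikit-learn sketch. The arrays below are random placeholders standing in for processing parameters and microstructure statistics; this is an illustration of the generic PCA-plus-regression pattern, not the paper's model.

```python
"""Sketch: reduce microstructure descriptors with PCA, then regress the
low-dimensional scores on processing parameters. All data are random stand-ins."""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_features = 40, 500
process_params = rng.uniform(size=(n_samples, 2))                 # e.g. scan speed, power
microstructure_stats = rng.normal(size=(n_samples, n_features))   # stand-in statistics

# Dimensionality reduction of the microstructure descriptors.
pca = PCA(n_components=3)
scores = pca.fit_transform(microstructure_stats)

# Reduced-order process-structure linkage: processing parameters -> PCA scores.
linkage = LinearRegression().fit(process_params, scores)
print("R^2 on training data:", round(linkage.score(process_params, scores), 3))
```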

  20. Characterizing Strain Variation in Engineered E. coli Using a Multi-Omics-Based Workflow

    DOE PAGES

    Brunk, Elizabeth; George, Kevin W.; Alonso-Gutierrez, Jorge; ...

    2016-05-19

    Understanding the complex interactions that occur between heterologous and native biochemical pathways represents a major challenge in metabolic engineering and synthetic biology. We present a workflow that integrates metabolomics, proteomics, and genome-scale models of Escherichia coli metabolism to study the effects of introducing a heterologous pathway into a microbial host. This workflow incorporates complementary approaches from computational systems biology, metabolic engineering, and synthetic biology; provides molecular insight into how the host organism microenvironment changes due to pathway engineering; and demonstrates how biological mechanisms underlying strain variation can be exploited as an engineering strategy to increase product yield. As a proof of concept, we present the analysis of eight engineered strains producing three biofuels: isopentenol, limonene, and bisabolene. Application of this workflow identified the roles of candidate genes, pathways, and biochemical reactions in observed experimental phenomena and facilitated the construction of a mutant strain with improved productivity. The contributed workflow is available as an open-source tool in the form of iPython notebooks.
