Huser, Vojtech; Rasmussen, Luke V; Oberg, Ryan; Starren, Justin B
2011-04-10
Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to the lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
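To make the retrospective-testing idea concrete, the following minimal Python sketch replays a flowchart of decision nodes against historical patient records. It is not the authors' XPDL/HealthFlow implementation; the node names, thresholds, and patient fields are hypothetical examples of flowchart-style decision logic.

```python
# Minimal sketch of flowchart-style decision logic replayed on retrospective data.
# Node names, thresholds, and patient fields are hypothetical, not the HealthFlow content.

PATIENTS = [
    {"id": 1, "hba1c": 9.2, "on_metformin": False},
    {"id": 2, "hba1c": 6.4, "on_metformin": True},
    {"id": 3, "hba1c": 8.1, "on_metformin": True},
]

# Each decision node has a test and the node to visit for True/False outcomes;
# terminal nodes carry the alert text.
FLOWCHART = {
    "check_hba1c": {"test": lambda p: p["hba1c"] >= 7.0,
                    "yes": "check_metformin", "no": None},
    "check_metformin": {"test": lambda p: not p["on_metformin"],
                        "yes": "alert_start_metformin", "no": "alert_intensify"},
    "alert_start_metformin": {"alert": "Consider starting metformin"},
    "alert_intensify": {"alert": "Consider intensifying therapy"},
}

def run_flowchart(patient, start="check_hba1c"):
    """Walk the flowchart for one patient and return any alert produced."""
    node_name = start
    while node_name is not None:
        node = FLOWCHART[node_name]
        if "alert" in node:                      # terminal alert node
            return node["alert"]
        node_name = node["yes"] if node["test"](patient) else node["no"]
    return None

if __name__ == "__main__":
    for p in PATIENTS:                           # retrospective replay
        alert = run_flowchart(p)
        print(f"patient {p['id']}: {alert or 'no action'}")
```

The same node graph could, in principle, be driven by live EHR events instead of a stored list, which is the prospective mode the abstract describes.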
DEWEY: the DICOM-enabled workflow engine system.
Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L
2014-06-01
Workflow is a widely used term to describe the sequence of steps to accomplish a task. The use of workflow technology in medicine and medical imaging in particular is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department. We designed it in a way to allow for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort. It also met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently assure that complex workflows are completed in a timely fashion.
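As a rough illustration of what a DICOM-driven workflow step might look like (this is not the DEWEY implementation), the sketch below reads DICOM headers with the pydicom library and routes studies to a preparation step by modality. The drop-folder path, routing table, and step logic are assumptions made for the example.

```python
# Illustrative sketch: route incoming DICOM objects to workflow steps based on
# header metadata. Requires pydicom (pip install pydicom); the folder path and
# routing rules are hypothetical.
from pathlib import Path
import pydicom

INCOMING = Path("incoming_dicom")   # hypothetical drop folder fed by the modality

def prepare_for_review(ds):
    """Placeholder 'exam preparation' step triggered by the engine."""
    print(f"Preparing study {ds.StudyInstanceUID} ({ds.Modality}) for clinical review")

ROUTES = {"CT": prepare_for_review, "MR": prepare_for_review}

def poll_once():
    for path in INCOMING.glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only
        step = ROUTES.get(ds.Modality)
        if step:
            step(ds)

if __name__ == "__main__":
    poll_once()
```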
2011-01-01
Background Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. Results We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to the lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. Conclusions We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform. PMID:21477364
Design and implementation of workflow engine for service-oriented architecture
NASA Astrophysics Data System (ADS)
Peng, Shuqing; Duan, Huining; Chen, Deyun
2009-04-01
As computer networks have developed rapidly and distributed deployment has become common in enterprise applications, traditional workflow engines show several deficiencies, such as complex structure, poor stability, poor portability, little reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer workflow engine architecture based on SOA is put forward according to the XPDL standard of the Workflow Management Coalition; the route control mechanism in the control model is implemented and a scheduling strategy for cyclic and acyclic routing is designed; and the workflow engine is implemented using technologies such as XML, JSP and EJB.
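The two routing cases mentioned above (acyclic versus cyclic routes) can be sketched in a few lines of Python. This is only an illustration of the scheduling idea, not the engine described in the paper; task names and the loop condition are invented.

```python
# Sketch of the two routing cases a workflow engine must schedule: acyclic
# routes (run once, in dependency order) and cyclic routes (repeat a segment
# until a condition holds, with a bound to avoid infinite loops).
from graphlib import TopologicalSorter   # Python 3.9+

ACYCLIC = {"validate": set(), "transform": {"validate"}, "archive": {"transform"}}

def run_acyclic(tasks):
    for name in TopologicalSorter(tasks).static_order():
        print("run", name)

def run_cyclic(body, condition, max_iterations=10):
    """Re-run the loop body until `condition` is satisfied (bounded)."""
    for i in range(max_iterations):
        for step in body:
            print(f"iteration {i}: run {step}")
        if condition(i):
            return
    raise RuntimeError("cyclic route did not converge")

if __name__ == "__main__":
    run_acyclic(ACYCLIC)
    run_cyclic(["review", "revise"], condition=lambda i: i >= 1)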
Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P
2015-01-01
As digital technology infiltrates every area of daily life, including the field of medicine, so it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology.
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicating ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.
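To give a feel for one step in such a SOAP-based chain, the hedged sketch below calls a SOAP operation with the zeep library. The WSDL URL, operation name, and parameters are placeholders invented for the example; they are not the actual ADaM or S4PM interfaces.

```python
# Hedged sketch of invoking one step in a SOAP-based mining chain with zeep
# (pip install zeep). Endpoint, operation, and parameters are hypothetical.
from zeep import Client

WSDL_URL = "https://example.org/adam/mining?wsdl"   # placeholder endpoint

def run_mining_step(granule_url, algorithm="kmeans"):
    client = Client(WSDL_URL)
    # zeep exposes WSDL operations as attributes of client.service
    return client.service.RunAlgorithm(inputUrl=granule_url, algorithm=algorithm)

if __name__ == "__main__":
    result = run_mining_step("ftp://example.org/staging/granule_001.hdf")
    print(result)
```

A BPEL engine such as ActiveBPEL essentially chains many calls of this kind, passing each service's output to the next service's input.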
Earth Science Mining Web Services
NASA Technical Reports Server (NTRS)
Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken
2008-01-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-12-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source Active BPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicating ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
[Development of a medical equipment support information system based on PDF portable document].
Cheng, Jiangbo; Wang, Weidong
2010-07-01
Based on the organizational structure and management system of hospital medical engineering support, the medical engineering support workflow was integrated to ensure that medical engineering data are collected effectively, accurately and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analysed, and all work processes were recorded in portable electronic documents. Using XML middleware technology and an SQL Server database, the system implements process management, data calculation, submission, storage and other functions. Practical application shows that the medical equipment support information system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly and controllable. The medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide complete and accurate digital data.
Game engines and immersive displays
NASA Astrophysics Data System (ADS)
Chang, Benjamin; Destefano, Marc
2014-02-01
While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows
NASA Technical Reports Server (NTRS)
McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush
2004-01-01
With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However what the scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
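A task graph with control and data flow, as described above, can be expressed and executed with very little code. The sketch below is a generic illustration of the idea (not ScyFlow itself): tasks declare their dependencies and are dispatched to a thread pool as soon as their predecessors finish. Task names and bodies are invented.

```python
# Minimal sketch of dependency-driven task-graph execution. Task bodies are
# placeholders; a real engine would dispatch to distributed resources.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def make_task(name):
    def task():
        print("running", name)
        return f"{name}-output"
    return task

GRAPH = {"mesh": set(), "solve": {"mesh"}, "post": {"solve"}, "plot": {"solve"}}
TASKS = {name: make_task(name) for name in GRAPH}

def run(graph, tasks, workers=4):
    ts = TopologicalSorter(graph)
    ts.prepare()
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while ts.is_active():
            ready = ts.get_ready()                      # tasks whose deps are done
            futures = {n: pool.submit(tasks[n]) for n in ready}
            for name, fut in futures.items():
                results[name] = fut.result()
                ts.done(name)
    return results

if __name__ == "__main__":
    print(run(GRAPH, TASKS))
```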
NASA Technology Transfer System
NASA Technical Reports Server (NTRS)
Tran, Peter B.; Okimura, Takeshi
2017-01-01
NTTS is the IT infrastructure for the Agency's Technology Transfer (T2) program, containing a portfolio of 60,000+ technologies and supporting all ten NASA field centers and HQ. It is the enterprise IT system for facilitating the Agency's technology transfer process, which includes reporting of new technologies (e.g., technology invention disclosures NF1679), protecting intellectual properties (e.g., patents), and commercializing technologies through various technology licenses, software releases, spinoffs, and success stories using custom-built workflow, reporting, data consolidation, integration, and search engines.
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides based on the platform to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides similar access capabilities to the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third-party workflow engines, provides support for the most widely used academic workflow engines and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
Concise Review: Organ Engineering: Design, Technology, and Integration.
Kaushik, Gaurav; Leijten, Jeroen; Khademhosseini, Ali
2017-01-01
Engineering complex tissues and whole organs has the potential to dramatically impact translational medicine in several avenues. Organ engineering is a discipline that integrates biological knowledge of embryological development, anatomy, physiology, and cellular interactions with enabling technologies including biocompatible biomaterials and biofabrication platforms such as three-dimensional bioprinting. When engineering complex tissues and organs, core design principles must be taken into account, such as the structure-function relationship, biochemical signaling, mechanics, gradients, and spatial constraints. Technological advances in biomaterials, biofabrication, and biomedical imaging allow for in vitro control of these factors to recreate in vivo phenomena. Finally, organ engineering emerges as an integration of biological design and technical rigor. An overall workflow for organ engineering and guiding technology to advance biology as well as a perspective on necessary future iterations in the field is discussed. Stem Cells 2017;35:51-60. © 2016 AlphaMed Press.
Workflow management systems in radiology
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim
1998-07-01
In a situation of shrinking health care budgets, increasing cost pressure and growing demands to increase the efficiency and the quality of medical services, health care enterprises are forced to optimize or completely re-design their processes. Although information technology is widely agreed to have the potential to contribute to cost reduction and efficiency improvement, the real success factors are the re-definition and automation of processes: Business Process Re-engineering and Workflow Management. In this paper we discuss architectures for the use of workflow management systems in radiology. We propose to move forward from information systems in radiology (RIS, PACS) to Radiology Management Systems, in which workflow functionality (process definitions and process automation) is implemented through autonomous workflow management systems (WfMS). In a workflow oriented architecture, an autonomous workflow enactment service communicates with workflow client applications via standardized interfaces. In this paper, we discuss the need for and the benefits of such an approach. The separation of the workflow management system from the application systems is emphasized, as are the consequences that arise for the architecture of workflow-oriented information systems. This includes an appropriate workflow terminology, and the definition of standard interfaces for workflow-aware application systems. Workflow studies in various institutions have shown that most of the processes in radiology are well structured and suited for a workflow management approach. Numerous commercially available Workflow Management Systems (WfMS) were investigated, and some of them, which are process-oriented and application independent, appear suitable for use in radiology.
Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested to control technological parameters, usually the only true constant parameter is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should increasingly be placed on physiological parameters. This contribution aims at illustrating a workflow of data life cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis aimed at a physiological explanation of observed productivity phenomena. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent, in terms of productivity and physiology, of the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easily accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were all reached within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). In conclusion, this contribution provides a workflow for gaining a physiological view of the process and illustrates potential benefits. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:261-270, 2017. © 2016 American Institute of Chemical Engineers.
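The descriptor highlighted in point (3) can be computed in a few lines once feed rate, biomass, and specific titer time courses are available. The sketch below uses entirely synthetic numbers (not the study's E. coli data) to show the calculation: integrate the feed to get cumulative substrate consumed, normalize by biomass to get dSn, and read off dSn at the time of maximum specific titer.

```python
# Synthetic illustration of the physiological-descriptor idea: locate the time
# of maximum specific titer and report the cumulative substrate consumed per
# biomass (dSn) at that point. All numbers are invented.
import numpy as np

t = np.array([0, 4, 8, 12, 16, 20, 24], dtype=float)           # h after induction
feed_rate = np.array([0.0, 1.0, 1.2, 1.4, 1.4, 1.4, 1.4])      # g substrate / h
biomass = np.array([10, 14, 18, 22, 25, 27, 28], dtype=float)  # g CDW
specific_titer = np.array([0.0, 0.8, 1.6, 2.3, 2.6, 2.5, 2.3]) # g product / g CDW

# cumulative substrate consumed (trapezoidal integration), normalized to biomass (g/g)
consumed = np.concatenate(
    [[0.0], np.cumsum(0.5 * (feed_rate[1:] + feed_rate[:-1]) * np.diff(t))]
)
dSn = consumed / biomass

i_max = int(np.argmax(specific_titer))
print(f"max specific titer {specific_titer[i_max]:.2f} g/g at t = {t[i_max]:.1f} h, "
      f"dSn = {dSn[i_max]:.2f} g/g")
```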
2017-01-31
mapping critical business workflows and then optimizing them with appropriate evolutionary technology choices is often called "Product Line Architecture"... technologies, products, services, and processes, and the USG evaluates them against its 360° requirements objectives, and refines them as appropriate, clarity... in rapidly evolving technological domains (e.g. by applying best commercial practices for open standard product line architecture.) An MP might be
Hydrography for the non-Hydrographer: A Paradigm shift in Data Processing
NASA Astrophysics Data System (ADS)
Malzone, C.; Bruce, S.
2017-12-01
Advancements in technology have led to overall systematic improvements, including hardware design, software architecture, and data transmission/telepresence. Historically, utilization of this technology has required a high level of knowledge obtained through many years of experience, training and/or education. High training costs are incurred to achieve and maintain an acceptable level of proficiency within an organization. Recently, engineers have developed off-the-shelf software technology called Qimera that has simplified the processing of hydrographic data. The core technology is centered on isolating tasks within the workflow and capitalizing on advances in computing technology to automate mundane, error-prone tasks, so that human effort is concentrated on the stages where it adds the most value. Key design features include guided workflow, transcription automation, processing state management, real-time QA, dynamic workflow for validation, collaborative cleaning and production-line processing. Because Qimera is designed to guide the user, it allows expedition leaders to focus on science while providing an educational opportunity for students to quickly learn the hydrographic processing workflow, including ancillary data analysis, trouble-shooting, calibration and cleaning. This paper provides case studies of how Qimera is currently implemented on scientific expeditions, the benefits of implementation, and how it is directing the future of on-board research for the non-hydrographer.
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. Running these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer providing a user interface that researchers can use. There are many scientific workflow systems but they are not interoperable. Learning a workflow system and creating workflows may require significant effort. Considering these efforts, it is not reasonable to expect that researchers will learn new workflow systems if they want to run workflows developed in other workflow systems. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability concept (CGI). It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept and DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus executables and data needed to execute them. It offers a wide range of browse and search operations. To support non-native workflow execution the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines or submits workflow engines with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.
Optimizing high performance computing workflow for protein functional annotation.
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-09-10
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
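The scatter/gather pattern behind such large-scale annotation can be sketched in plain Python. In the example below, `classify_batch` is only a stand-in for the real PSI-BLAST-based assignment to clusters of orthologous groups, and the sequences are synthetic; the point is the batching and parallel dispatch.

```python
# Hedged sketch of parallel protein classification: split sequences into
# batches and classify them across worker processes. classify_batch() is a
# placeholder for the real alignment-based classifier.
from concurrent.futures import ProcessPoolExecutor

SEQUENCES = [("prot_%05d" % i, "MKT" + "A" * (i % 7)) for i in range(1000)]

def classify_batch(batch):
    """Placeholder classifier: returns (protein_id, assigned_group) pairs."""
    return [(pid, "COG%04d" % (len(seq) % 10)) for pid, seq in batch]

def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    assignments = {}
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(classify_batch, chunks(SEQUENCES, 100)):
            assignments.update(result)
    print(len(assignments), "proteins classified")
```

On an HPC system the same pattern is typically expressed with MPI or a job scheduler rather than a local process pool, but the batching logic is the same.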
Optimizing high performance computing workflow for protein functional annotation
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-01-01
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296
Parametric Workflow (BIM) for the Repair Construction of Traditional Historic Architecture in Taiwan
NASA Astrophysics Data System (ADS)
Ma, Y.-P.; Hsu, C. C.; Lin, M.-C.; Tsai, Z.-W.; Chen, J.-Y.
2015-08-01
In Taiwan, numerous existing traditional buildings are constructed with wooden structures, brick structures, and stone structures. This paper focuses on Taiwanese traditional historic architecture, targets traditional wooden-structure buildings as the design proposition, and develops a BIM workflow for modeling complex wooden combination geometry, integrating it with traditional 2D documents, and visualizing repair construction assumptions within the 3D model representation. The goal of this article is to explore the current problems to overcome in wooden historic building conservation, and to introduce BIM technology for conserving, documenting, managing, and creating full engineering drawings and information to effectively support historic conservation. Although BIM is mostly oriented to current construction praxis, there have been some attempts to investigate its applicability in historic conservation projects. This article also illustrates the importance and advantages of using a BIM workflow in the repair construction process, compared with a generic workflow.
2005-06-01
cognitive task analysis, organizational information dissemination and interaction, systems engineering, collaboration and communications processes, decision-making processes, and data collection and organization. By blending these diverse disciplines, command centers can be designed to support decision-making, cognitive analysis, information technology, and the human factors engineering aspects of Command and Control (C2). This model can then be used as a baseline when dealing with work in areas of business processes, workflow engineering, information management,
Distributed Data Integration Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Critchlow, T; Ludaescher, B; Vouk, M
The Internet is becoming the preferred method for disseminating scientific data from a variety of disciplines. This can result in information overload on the part of the scientists, who are unable to query all of the relevant sources, even if they knew where to find them, what they contained, how to interact with them, and how to interpret the results. A related issue is that keeping up with current trends in information technology often taxes the end-user's expertise and time. Thus instead of benefiting from this information rich environment, scientists become experts on a small number of sources and technologies, use them almost exclusively, and develop a resistance to innovations that can enhance their productivity. Enabling information based scientific advances, in domains such as functional genomics, requires fully utilizing all available information and the latest technologies. In order to address this problem we are developing an end-user-centric, domain-sensitive workflow-based infrastructure, shown in Figure 1, that will allow scientists to design complex scientific workflows that reflect the data manipulation required to perform their research without an undue burden. We are taking a three-tiered approach to designing this infrastructure utilizing (1) abstract workflow definition, construction, and automatic deployment, (2) complex agent-based workflow execution and (3) automatic wrapper generation. In order to construct a workflow, the scientist defines an abstract workflow (AWF) in terminology (semantics and context) that is familiar to him/her. This AWF includes all of the data transformations, selections, and analyses required by the scientist, but does not necessarily specify particular data sources. This abstract workflow is then compiled into an executable workflow (EWF, in our case XPDL) that is then evaluated and executed by the workflow engine. This EWF contains references to specific data sources and interfaces capable of performing the desired actions. In order to provide access to the largest number of resources possible, our lowest level utilizes automatic wrapper generation techniques to create information and data wrappers capable of interacting with the complex interfaces typical in scientific analysis. The remainder of this document outlines our work in these three areas, the impact our work has made, and our plans for the future.
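The abstract-to-executable compilation step described above can be illustrated with a small Python sketch: each domain-level step in the abstract workflow is bound to a concrete wrapper and endpoint drawn from a registry. The step names, registry entries, and endpoints are hypothetical, and the real system emits XPDL rather than a Python list.

```python
# Sketch of compiling an abstract workflow (domain-level step names, no data
# sources) into an executable workflow by binding each step to a registered
# wrapper. All names and endpoints are placeholders.
ABSTRACT_WORKFLOW = ["fetch_sequences", "align", "annotate"]

REGISTRY = {
    "fetch_sequences": {"wrapper": "genbank_wrapper", "endpoint": "https://example.org/genbank"},
    "align":           {"wrapper": "clustal_wrapper", "endpoint": "https://example.org/align"},
    "annotate":        {"wrapper": "go_wrapper",      "endpoint": "https://example.org/go"},
}

def compile_workflow(abstract_steps, registry):
    executable = []
    for step in abstract_steps:
        binding = registry.get(step)
        if binding is None:
            raise KeyError(f"no concrete source registered for step '{step}'")
        executable.append({"step": step, **binding})
    return executable

if __name__ == "__main__":
    for task in compile_workflow(ABSTRACT_WORKFLOW, REGISTRY):
        print(task)
```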
BPELPower—A BPEL execution engine for geospatial web services
NASA Astrophysics Data System (ADS)
Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi
2012-10-01
The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements are especially in its capabilities in handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the decade. Two scenarios were discussed in detail to demonstrate the capabilities of BPELPower. That study showed a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancement at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high performance parallel processing and broad Web paradigms.
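For a sense of the kind of geospatial service call that such a workflow step wraps, the hedged sketch below sends an OGC WPS Execute request as key-value pairs with the requests library. The endpoint, process identifier, and inputs are placeholders, not a real BPELPower deployment.

```python
# Sketch of invoking a geospatial Web Processing Service (WPS) step via a
# key-value-pair Execute request. Endpoint and process identifier are
# hypothetical; the server replies with an XML ExecuteResponse document.
import requests

WPS_ENDPOINT = "https://example.org/wps"   # hypothetical WPS server

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "gdal.reproject",                          # hypothetical process
    "datainputs": "InputData=https://example.org/a.tif;TargetSRS=EPSG:4326",
}

response = requests.get(WPS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
print(response.text[:500])   # first part of the XML ExecuteResponse
```

A BPEL engine chains many such calls, passing GML or service references from one step to the next.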
Characterizing Strain Variation in Engineered E. coli Using a Multi-Omics-Based Workflow
Brunk, Elizabeth; George, Kevin W.; Alonso-Gutierrez, Jorge; ...
2016-05-19
Understanding the complex interactions that occur between heterologous and native biochemical pathways represents a major challenge in metabolic engineering and synthetic biology. We present a workflow that integrates metabolomics, proteomics, and genome-scale models of Escherichia coli metabolism to study the effects of introducing a heterologous pathway into a microbial host. This workflow incorporates complementary approaches from computational systems biology, metabolic engineering, and synthetic biology; provides molecular insight into how the host organism microenvironment changes due to pathway engineering; and demonstrates how biological mechanisms underlying strain variation can be exploited as an engineering strategy to increase product yield. As a proof of concept, we present the analysis of eight engineered strains producing three biofuels: isopentenol, limonene, and bisabolene. Application of this workflow identified the roles of candidate genes, pathways, and biochemical reactions in observed experimental phenomena and facilitated the construction of a mutant strain with improved productivity. The contributed workflow is available as an open-source tool in the form of iPython notebooks.
Proteomics in the genome engineering era.
Vandemoortele, Giel; Gevaert, Kris; Eyckerman, Sven
2016-01-01
Genome engineering experiments used to be lengthy, inefficient, and often expensive, preventing a widespread adoption of such experiments for the full assessment of endogenous protein functions. With the revolutionary clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 technology, genome engineering became accessible to the broad life sciences community and is now implemented in several research areas. One particular field that can benefit significantly from this evolution is proteomics where a substantial impact on experimental design and general proteome biology can be expected. In this review, we describe the main applications of genome engineering in proteomics, including the use of engineered disease models and endogenous epitope tagging. In addition, we provide an overview on current literature and highlight important considerations when launching genome engineering technologies in proteomics workflows. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz
2016-01-01
As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions. PMID:27896971
Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz
2017-01-01
As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.
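To show what a Common Workflow Language description looks like in practice, the sketch below builds a minimal CWL CommandLineTool document as a Python dict and serializes it to YAML; an engine such as Rabix interprets documents of this shape. The `wc`-based line-count tool is a toy example chosen for the sketch, not taken from the paper.

```python
# Minimal CWL CommandLineTool description, generated from Python and written
# as YAML (requires PyYAML). The tool itself (count lines with `wc -l`) is a
# toy example.
import yaml

tool = {
    "cwlVersion": "v1.0",
    "class": "CommandLineTool",
    "baseCommand": ["wc", "-l"],
    "stdout": "line_count.txt",
    "inputs": {
        "input_file": {"type": "File", "inputBinding": {"position": 1}},
    },
    "outputs": {
        "line_count": {"type": "stdout"},
    },
}

with open("count_lines.cwl", "w") as handle:
    yaml.safe_dump(tool, handle, sort_keys=False)

print(open("count_lines.cwl").read())
```

Because the description is declarative, any CWL-aware engine can schedule, log, and parallelize the same tool without changes to the document.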
NASA Astrophysics Data System (ADS)
Wang, Ximing; Martinez, Clarisa; Wang, Jing; Liu, Ye; Liu, Brent
2014-03-01
Clinical trials usually need to collect, track and analyze multimedia data according to the workflow. Currently, clinical trial data management requirements are normally addressed with custom-built systems. Challenges occur in the workflow design within different trials. The traditional pre-defined custom-built system is usually limited to a specific clinical trial and normally requires time-consuming and resource-intensive software development. To provide a solution, we present a user-customizable, imaging informatics-based intelligent workflow engine system for managing stroke rehabilitation clinical trials. The intelligent workflow engine provides flexibility in building and tailoring the workflow in various stages of clinical trials. By providing a solution to tailor and automate the workflow, the system will save time and reduce errors for clinical trials. Although our system is designed for rehabilitation clinical trials, it may be extended to other imaging-based clinical trials as well.
Quality Assurance Program Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halford, Vaughn Edward; Ryder, Ann Marie
Effective May 1, 2017, led by a new executive leadership team, Sandia began operating within a new organizational structure. National Technology and Engineering Solutions of Sandia (Sandia’s) Quality Assurance Program (QAP) was established to assign responsibilities and authorities, define workflow policies and requirements, and provide for the performance and assessment of work.
Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.
Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S
The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
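The ingest step of such a dashboard can be illustrated with a short Python sketch: parse a handful of fields out of an HL7 order message and store one row per examination. The sample message is synthetic, the field positions are simplified, and sqlite3 stands in for the SQL Server database used in the actual deployment.

```python
# Hedged sketch of the dashboard ingest step: extract examination fields from
# an HL7 order message and store them for display. Sample message is synthetic.
import sqlite3

SAMPLE_HL7 = (
    "MSH|^~\\&|RIS|HOSP|DASH|RAD|202301011200||ORM^O01|123|P|2.3\r"
    "PID|1||MRN0001||DOE^JANE\r"
    "OBR|1|ACC123||71020^CHEST XR|||202301011155||||||||||||||||||SCHEDULED"
)

def parse_exam(message):
    # Index segments by their 3-letter name, fields split on '|'
    segments = {line.split("|")[0]: line.split("|") for line in message.split("\r")}
    obr = segments["OBR"]
    return {"accession": obr[2], "procedure": obr[4], "status": obr[-1]}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exams (accession TEXT, procedure_code TEXT, status TEXT)")
exam = parse_exam(SAMPLE_HL7)
conn.execute("INSERT INTO exams VALUES (?, ?, ?)",
             (exam["accession"], exam["procedure"], exam["status"]))
print(conn.execute("SELECT * FROM exams").fetchall())
```

The dashboard then simply queries this table on a refresh interval and renders scheduled and in-progress examinations.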
From the desktop to the grid: scalable bioinformatics via workflow conversion.
de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver
2016-03-12
Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well defined tasks, each with well defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community; therefore, each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of parameters, inputs, outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.
Engineering Documentation and Data Control
NASA Technical Reports Server (NTRS)
Matteson, Michael J.; Bramley, Craig; Ciaruffoli, Veronica
2001-01-01
Mississippi Space Services (MSS), the facility services contractor for NASA's John C. Stennis Space Center (SSC), is utilizing technology to improve engineering documentation and data control. Two identified improvement areas, labor intensive documentation research and outdated drafting standards, were targeted as top priority. MSS selected AutoManager(R) WorkFlow from Cyco software to manage engineering documentation. The software is currently installed on over 150 desktops. The outdated SSC drafting standard was written for pre-CADD drafting methods, in other words, board drafting. Implementation of COTS software solutions to manage engineering documentation and update the drafting standard resulted in significant increases in productivity by reducing the time spent searching for documents.
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task which involves a great deal of domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report for various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating the digital publishing workflow ontology and an activity-based costing model with a Petri-net based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing a trustworthy customer relationship, and supplementing professional assessment services with proven quantitative simulation and estimation technology.
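The following Python sketch illustrates the general principle behind a Petri-net based workflow simulation: transitions fire against tokens in places, and the firing history over simulated time yields rough throughput estimates. The places, transitions, and timings are invented and do not come from the paper's publishing-workflow ontology or costing model.

# A serialized token game for illustration; a fuller simulator would fire
# concurrently enabled transitions in parallel and attach activity-based costs.
import random

places = {"submitted": 20, "prepress": 0, "printed": 0, "operator_free": 2}

# each transition: (inputs consumed, outputs produced, mean duration in minutes)
transitions = {
    "start_prepress": ({"submitted": 1, "operator_free": 1}, {"prepress": 1}, 15),
    "print_job": ({"prepress": 1}, {"printed": 1, "operator_free": 1}, 30),
}

def enabled(inputs):
    return all(places[p] >= n for p, n in inputs.items())

clock = 0.0
fired = {t: 0 for t in transitions}
while places["printed"] < 20:
    ready = [t for t, (inp, _, _) in transitions.items() if enabled(inp)]
    if not ready:
        break
    name = random.choice(ready)
    inputs, outputs, mean = transitions[name]
    for p, n in inputs.items():
        places[p] -= n
    for p, n in outputs.items():
        places[p] += n
    clock += random.expovariate(1.0 / mean)   # stochastic duration of this firing
    fired[name] += 1

print(f"simulated {places['printed']} jobs in {clock:.0f} min "
      f"({places['printed'] / clock * 60:.1f} jobs/hour)")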
Krujatz, Felix; Lode, Anja; Seidel, Julia; Bley, Thomas; Gelinsky, Michael; Steingroewer, Juliane
2017-10-25
The diversity and complexity of biotechnological applications are constantly increasing, with ever expanding ranges of production hosts, cultivation conditions and measurement tasks. Consequently, many analytical and cultivation systems for biotechnology and bioprocess engineering, such as microfluidic devices or bioreactors, are tailor-made to precisely satisfy the requirements of specific measurements or cultivation tasks. Additive manufacturing (AM) technologies offer the possibility of fabricating tailor-made 3D laboratory equipment directly from CAD designs with previously inaccessible levels of freedom in terms of structural complexity. This review discusses the historical background of these technologies, their most promising current implementations and the associated workflows, fabrication processes and material specifications, together with some of the major challenges associated with using AM in biotechnology/bioprocess engineering. To illustrate the great potential of AM, selected examples in microfluidic devices, 3D-bioprinting/biofabrication and bioprocess engineering are highlighted. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hwang, Darryl H.; Ma, Kevin; Yepes, Fernando; Nadamuni, Mridula; Nayyar, Megha; Liu, Brent; Duddalwar, Vinay; Lepore, Natasha
2015-12-01
A conventional radiology report primarily consists of a large amount of unstructured text, and lacks clear, concise, consistent and content-rich information. Hence, an area of unmet clinical need consists of developing better ways to communicate radiology findings and information specific to each patient. Here, we design a new workflow and reporting system that combines and integrates advances in engineering technology with those from the medical sciences, the Multidimensional Interactive Radiology Report and Analysis (MIRRA). Until recently, clinical standards have primarily relied on 2D images for the purpose of measurement, but with the advent of 3D processing, many of the manually measured metrics can be automated, leading to better reproducibility and less subjective measurement placement. Hence, we make use of this newly available 3D processing in our workflow. Our pipeline is used here to standardize the labeling, tracking, and quantifying of metrics for renal masses.
Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems
NASA Astrophysics Data System (ADS)
Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong
Recently many vendors and groups have advocated using BPEL and WS-BPEL as a workflow language to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made managing and maintaining hierarchical procedural programs difficult. BPEL lacks constructs for logical modularity such as the requirements construct from the STL [12] or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of goals and service typing. AI planning was used to enable process engineers to create and validate systems that used services and goals as first-class concepts and compiled processes at run time for execution.
Provenance-Powered Automatic Workflow Generation and Composition
NASA Astrophysics Data System (ADS)
Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.
2015-12-01
In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is oftentimes contradictory to a domain scientist's daily routine of conducting research and exploration. We hope to resolve this dispute. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from recorded user behavior, supporting the adaptation and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance to further facilitate collaboration in the science community. We have also established a Petri nets-based verification instrument for provenance-based automatic workflow generation and recommendation.
NASA Astrophysics Data System (ADS)
Pan, Tianheng
2018-01-01
In recent years, the combination of workflow management systems and multi-agent technology has become an active research field. The lack of flexibility in workflow management systems can be mitigated by introducing multi-agent collaborative management. The workflow management system described here adopts a distributed structure, which addresses the fragility of the traditional centralized workflow architecture. In this paper, the agents of a distributed workflow management system are divided according to their functions, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are discussed.
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
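A minimal Python sketch of the data-decomposition idea follows: spectra are split into balanced chunks, an unmodified search step runs on each chunk in parallel, and the partial results are recomposed. The chunking scheme and the stand-in search function are simplifications, not the paper's mzXML/pepXML algorithms.

# Simplified decomposition/recomposition wrapper around an untouched search step.
from concurrent.futures import ProcessPoolExecutor

def split_spectra(spectra, n_chunks):
    """Round-robin decomposition keeps chunk sizes balanced."""
    chunks = [[] for _ in range(n_chunks)]
    for i, spectrum in enumerate(spectra):
        chunks[i % n_chunks].append(spectrum)
    return chunks

def search_chunk(chunk):
    """Stand-in for invoking the search engine on one decomposed file."""
    return [{"spectrum": s, "peptide": "PEPTIDE", "score": len(s)} for s in chunk]

def parallel_search(spectra, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial_results = list(pool.map(search_chunk, split_spectra(spectra, workers)))
    return [hit for part in partial_results for hit in part]   # recomposition

if __name__ == "__main__":
    fake_spectra = [f"scan_{i}" for i in range(1000)]
    print(len(parallel_search(fake_spectra)), "identifications merged")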
Point-of-care technology: integration for improved delivery of care.
Gregory, Debbie; Buckner, Martha
2014-01-01
The growing complexity of technology, equipment, and devices involved in patient care delivery can be staggering and overwhelming. Technology is intended to be a tool to help clinicians, but it can also be a frustrating hindrance if not thoughtfully planned and strategically aligned. Critical care nurses are key partners in the collaborations needed to improve safety and quality through health information technology (IT). Nurses must advocate for systems that are interoperable and adapted to the context of care experiences. The involvement and collaboration between clinicians, information technology specialists, biomedical engineers, and vendors has never been more relevant and applicable. Working together strategically with a shared vision can effectively provide a seamless clinical workflow, maximize technology investments, and ultimately improve patient care delivery and outcomes. Developing a strategic integrated clinical and IT roadmap is a critical component of today's health care environment. How can technology strategy be aligned from the executive suite to the bedside caregiver? What is the model for using clinical workflows to drive technology adoption? How can the voice of the critical care nurse strengthen this process? How can success be assured from the initial assessment and selection of technology to a sustainable support model? What is the vendor's role as a strategic partner and "co-caregiver"?
Provenance for Runtime Workflow Steering and Validation in Computational Seismology
NASA Astrophysics Data System (ADS)
Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M.
2014-12-01
Provenance systems may be offered by modern workflow engines to collect metadata about the data transformations at runtime. If combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation process of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, if we consider research applications performing long-lasting cross-correlation analysis and high-resolution simulations, the immediate notification of logical errors and rapid access to intermediate results can produce reactions which foster more efficient progress of the research. These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine-grained provenance and the development of provenance-aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc.). This work looks at the adoption of W3C-PROV concepts and its data model within a user-driven processing and validation framework for seismic data, also supporting computational and data management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with the community's understanding. Moreover, it can contain user-defined terms and annotations. The current implementation of the system is supported by the EU-funded VERCE project (http://verce.eu). In addition to the provenance generation mechanisms, it provides a prototype browser-based user interface and a web API built on top of a NoSQL storage technology, experimenting with ways to ensure rapid and flexible access to the lineage traces. It supports users with the visualisation of graphical products and offers combined operations to access and download data which may be selectively stored at runtime into dedicated data archives.
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflow for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings of the products after proper peer review of the models is conducted. Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
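The following toy Python sketch illustrates the conceptual-to-concrete instantiation step: a workflow defined over service types is bound to concrete endpoints when a virtual product is requested. The product name, service types, and registry entries are hypothetical and stand in for GeoBrain's ontology-backed catalog and BPEL generation.

# Toy instantiation of a conceptual workflow (types) into an executable plan.
# Names and endpoints are invented for illustration.

# conceptual workflow for a virtual product: an ordered list of service types
CONCEPTUAL = {"drought_index": ["SurfaceReflectance", "NDVI", "DroughtIndex"]}

# registry mapping service types to concrete endpoints discovered at runtime
REGISTRY = {
    "SurfaceReflectance": "https://example.org/wps/reflectance",
    "NDVI": "https://example.org/wps/ndvi",
    "DroughtIndex": "https://example.org/wps/drought",
}

def instantiate(virtual_product, source_dataset):
    """Bind each service type to a concrete endpoint, producing an executable plan."""
    steps = []
    current_input = source_dataset
    for service_type in CONCEPTUAL[virtual_product]:
        endpoint = REGISTRY[service_type]          # type-level to concrete binding
        steps.append({"service": endpoint, "input": current_input})
        current_input = f"{service_type}({current_input})"
    return steps

for step in instantiate("drought_index", "MODIS_tile_h12v04"):
    print(step)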
Patterson, Emily S
2018-05-01
Objective To integrate and synthesize insights from recent studies of workarounds to the intended use of health information technology (HIT) by health care professionals. Background Systems are safest when the documentation of how work is done in policies and procedures closely matches what people actually do when they are working. Proactively identifying and managing workarounds to the intended use of technology, including deviations from expected workflows, can improve system safety. Method A narrative review of studies of workarounds with HIT was conducted to identify themes in the literature. Results Three themes were identified: (1) Users circumvented new additional steps in the workflow when using HIT, (2) interdisciplinary team members communicated via HIT in text fields that were intended for other purposes, and (3) locally developed paper-based and manual whiteboard systems were used instead of HIT to support situation awareness of individuals and groups; an example of a locally developed system was handwritten notes about a patient on a piece of paper folded up and carried in a nurse's pocket. Conclusion Workarounds were employed to avoid changes to workflow, enable interdisciplinary communication, coordinate activities, and have real-time portable access to summarized and synthesized information. Application Implications for practice include providing summary overview displays, explicitly supporting role-based communication and coordination through HIT, and reducing the risk to reputation due to electronic monitoring of individual performance.
Globus | Informatics Technology for Cancer Research (ITCR)
Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...
2016-10-06
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Moenninghoff, Christoph; Umutlu, Lale; Kloeters, Christian; Ringelstein, Adrian; Ladd, Mark E; Sombetzki, Antje; Lauenstein, Thomas C; Forsting, Michael; Schlamann, Marc
2013-06-01
Workflow efficiency and workload of radiological technologists (RTs) were compared in head examinations performed with two 1.5 T magnetic resonance (MR) scanners equipped with or without an automated user interface called "day optimizing throughput" (Dot) workflow engine. Thirty-four patients with known intracranial pathology were examined with a 1.5 T MR scanner with Dot workflow engine (Siemens MAGNETOM Aera) and with a 1.5 T MR scanner with conventional user interface (Siemens MAGNETOM Avanto) using four standardized examination protocols. The elapsed time for all necessary work steps, which were performed by 11 RTs within the total examination time, was compared for each examination at both MR scanners. The RTs evaluated the user-friendliness of both scanners by a questionnaire. Normality of distribution was checked for all continuous variables by use of the Shapiro-Wilk test. Normally distributed variables were analyzed by Student's paired t-test, otherwise Wilcoxon signed-rank test was used to compare means. Total examination time of MR examinations performed with Dot engine was reduced from 24:53 to 20:01 minutes (P < .001) and the necessary RT intervention decreased by 61% (P < .001). The Dot engine's automated choice of MR protocols was rated significantly better by the RTs than the conventional user interface (P = .001). According to this preliminary study, the Dot workflow engine is time-saving user assistance software, which decreases the RTs' effort significantly and may help to automate neuroradiological examinations for a higher workflow efficiency. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
Shabo, Amnon; Peleg, Mor; Parimbelli, Enea; Quaglini, Silvana; Napolitano, Carlo
2016-12-07
Implementing a decision-support system within a healthcare organization requires integration of clinical domain knowledge with resource constraints. Computer-interpretable guidelines (CIG) are excellent instruments for addressing clinical aspects, while business process management (BPM) languages and workflow (Wf) engines manage the logistic and organizational constraints. Our objective is the orchestration of all the relevant factors needed for successful execution of a patient's care pathways, especially when spanning the continuum of care, from acute to community or home care. We considered three strategies for integrating CIGs with organizational workflows: extending the CIG or BPM languages and their engines, or creating an interplay between them. We used the interplay approach to implement a set of use cases arising from a CIG implementation in the domain of Atrial Fibrillation. To provide a more scalable and standards-based solution, we explored the use of the Cross-Enterprise Document Workflow Integration Profile. We describe our proof-of-concept implementation of five use cases. We utilized the Personal Health Record of the MobiGuide project to implement a loosely coupled approach between the Activiti BPM engine and the Picard CIG engine. Changes in the PHR were detected by polling. IHE profiles were used to develop workflow documents that orchestrate cross-enterprise execution of cardioversion. Interplay between CIG and BPM engines can support orchestration of care flows within organizational settings.
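A rough Python sketch of the loosely coupled interplay idea follows: a mediator polls the personal health record for new entries and triggers the organizational workflow when a guideline decision appears. The data structures, event names, and engine calls are placeholders, not the MobiGuide, Activiti, or Picard APIs.

# Sketch of CIG/BPM interplay via PHR polling; all names are hypothetical.
import time

PHR = []            # stand-in for the shared personal health record
seen = set()

def start_bpm_process(name, payload):
    print(f"[BPM] starting process '{name}' for {payload}")

def notify_cig_engine(entry):
    print(f"[CIG] guideline engine notified of {entry['type']}")

def poll_once():
    """Scan the PHR for entries not yet handled and route them to both engines."""
    for entry in PHR:
        if entry["id"] in seen:
            continue
        seen.add(entry["id"])
        if entry["type"] == "cardioversion_recommended":
            # a CIG decision triggers the logistic, cross-enterprise workflow
            start_bpm_process("schedule_cardioversion", entry["patient"])
        notify_cig_engine(entry)

PHR.append({"id": 1, "type": "cardioversion_recommended", "patient": "pt-42"})
poll_once()
time.sleep(0.1)     # in a deployment this loop would run on a schedule
poll_once()         # no new entries this time: nothing happens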
Workflow technology: the new frontier. How to overcome the barriers and join the future.
Shefter, Susan M
2006-01-01
Hospitals are catching up to the business world in the introduction of technology systems that support professional practice and workflow. The field of case management is highly complex and interrelates with diverse groups in diverse locations. The last few years have seen the introduction of Workflow Technology Tools, which can improve the quality and efficiency of discharge planning by the case manager. Despite the availability of these wonderful new programs, many case managers are hesitant to adopt the new technology and workflow. For a myriad of reasons, a computer-based workflow system can seem like a brick wall. This article discusses, from a practitioner's point of view, how professionals can gain confidence and skill to get around the brick wall and join the future.
Supply chain optimization at an academic medical center.
Labuhn, Jonathan; Almeter, Philip; McLaughlin, Christopher; Fields, Philip; Turner, Benjamin
2017-08-01
A successful supply chain optimization project that leveraged technology, engineering principles, and a technician workflow redesign in the setting of a growing health system is described. With continued rises in medication costs, medication inventory management is increasingly important. Proper management of central pharmacy inventory and floor-stock inventory in automated dispensing cabinets (ADCs) can be challenging. In an effort to improve control of inventory costs in the central pharmacy of a large academic medical center, the pharmacy department implemented a supply chain optimization project in collaboration with the medical center's inhouse team of experts on process improvement and industrial engineering. The project had 2 main components: (1) upgrading and reconfiguring carousel technology within an expanded central pharmacy footprint to generate accurate floor-stock inventory replenishment reports, which resulted in efficiencies within the medication-use system, and (2) implementing a technician workflow redesign and algorithm to right-size the ADC inventory, which decreased inventory stockouts (i.e., incidents of depletion of medication stock) and improved ADC user satisfaction. Through a multifaceted approach to inventory management, the number of stockouts per month was decreased and ADC inventory was optimized, resulting in a one-time inventory cost savings of $220,500. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Computer-aided dental prostheses construction using reverse engineering.
Solaberrieta, E; Minguez, R; Barrenetxea, L; Sierra, E; Etxaniz, O
2014-01-01
The implementation of computer-aided design/computer-aided manufacturing (CAD/CAM) systems with virtual articulators, which take into account the kinematics, constitutes a breakthrough in the construction of customised dental prostheses. This paper presents a multidisciplinary protocol involving CAM techniques to produce dental prostheses. This protocol includes a step-by-step procedure using innovative reverse engineering technologies to transform completely virtual design processes into customised prostheses. A special emphasis is placed on a novel method that permits a virtual location of the models. The complete workflow includes the optical scanning of the patient, the use of reverse engineering software and, if necessary, the use of rapid prototyping to produce CAD temporary prostheses.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology, and NVIDIA's GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; the integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
NASA Astrophysics Data System (ADS)
McCarthy, Ann
2006-01-01
The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.
Task-technology fit of video telehealth for nurses in an outpatient clinic setting.
Cady, Rhonda G; Finkelstein, Stanley M
2014-07-01
Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.
Generic worklist handler for workflow-enabled products
NASA Astrophysics Data System (ADS)
Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas
1999-07-01
Workflow management (WfM) is an emerging field of medical information technology. It appears as a promising key technology to model, optimize and automate processes for the sake of improved efficiency, reduced costs and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic work list handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded work list handlers will be called 'workflow-enabled application systems'. In this paper we discuss the functional requirements of work list handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management and, later in the paper, the available standards as defined by the Workflow Management Coalition are briefly reviewed.
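A minimal Python sketch of a generic work list handler follows, showing the offer/fetch/claim/complete cycle between an enactment service and a workflow-enabled application; the class and method names are illustrative assumptions rather than a standardized interface.

# Minimal work list handler sketch; names do not follow any published API.
import uuid

class WorklistHandler:
    def __init__(self):
        self._items = {}

    def offer(self, activity, data):
        """Called by the enactment service when an activity becomes ready."""
        item_id = str(uuid.uuid4())
        self._items[item_id] = {"activity": activity, "data": data, "state": "offered"}
        return item_id

    def fetch(self):
        """Called by the application to list work items it may claim."""
        return [(i, w) for i, w in self._items.items() if w["state"] == "offered"]

    def claim(self, item_id):
        self._items[item_id]["state"] = "claimed"

    def complete(self, item_id, result):
        self._items[item_id].update(state="completed", result=result)
        # in a full system this would notify the enactment service so it can
        # advance the process instance past the completed activity

handler = WorklistHandler()
wid = handler.offer("review_images", {"study": "CT-0001"})
handler.claim(wid)
handler.complete(wid, {"report": "no acute findings"})
print(handler.fetch())   # empty: no offered items remain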
Military Interoperable Digital Hospital Testbed (MIDHT)
2015-12-01
Objectives include implementing Pharmacy Robotics and researching and analyzing the resulting technological impact on medication errors, pharmacist productivity, nurse satisfaction/workflow, and patient satisfaction.
Workflow based framework for life science informatics.
Tiwari, Abhishek; Sekhar, Arvind K T
2007-10-01
Workflow technology is a generic mechanism to integrate diverse types of available resources (databases, servers, software applications and different services) which facilitate knowledge exchange within traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own research protocols for scientific analysis. Application of workflow technology has been reported in areas like drug discovery, genomics, large-scale gene expression analysis, proteomics, and systems biology. In this article, we discuss the existing workflow systems and the trends in applications of workflow-based systems.
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Do you have workflows with many manual tasks that slow down your business? Or do you scale back workflows because there are simply too many manual tasks? Basic workflow robots can automate some common tasks, but not everything. This presentation will show how advanced robots called "expression robots" can be set up to perform everything from simple tasks such as moving, creating folders, renaming, changing or creating an attribute, and revising, to more complex tasks like creating a PDF, or even launching a session of Creo Parametric and performing a specific modeling task. Expression robots are able to utilize the Java API and Info*Engine to do almost anything you can imagine! Best of all, these tools are supported by PTC and will work with later releases of Windchill. Limited knowledge of Java, Info*Engine, and XML is required. The attendee will learn what tasks expression robots are capable of performing, what is involved in setting up an expression robot, and will gain a basic understanding of simple Info*Engine tasks.
Scientific Data Management (SDM) Center for Enabling Technologies. Final Report, 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Our contributions to advancing the State of the Art in scientific workflows have focused on the following areas: Workflow development; Generic workflow components and templates; Provenance collection and analysis; and, Workflow reliability and fault tolerance.
Case study: impact of technology investment on lead discovery at Bristol-Myers Squibb, 1998-2006.
Houston, John G; Banks, Martyn N; Binnie, Alastair; Brenner, Stephen; O'Connell, Jonathan; Petrillo, Edward W
2008-01-01
We review strategic approaches taken over an eight-year period at BMS to implement new high-throughput approaches to lead discovery. Investments in compound management infrastructure and chemistry library production capability allowed significant growth in the size, diversity and quality of the BMS compound collection. Screening platforms were upgraded with robust automated technology to support miniaturized assay formats, while workflows and information handling technologies were streamlined for improved performance. These technology changes drove the need for a supporting organization in which critical engineering, informatics and scientific skills were more strongly represented. Taken together, these investments led to significant improvements in speed and productivity as well as a greater impact of screening campaigns on the initiation of new drug discovery programs.
Wang, Ximing; Liu, Brent J; Martinez, Clarisa; Zhang, Xuejun; Winstein, Carolee J
2015-01-01
Imaging based clinical trials can benefit from a solution to efficiently collect, analyze, and distribute multimedia data at various stages within the workflow. Currently, the data management needs of these trials are typically addressed with custom-built systems. However, software development of custom-built systems for versatile workflows can be resource-consuming. To address these challenges, we present a system with a workflow engine for imaging based clinical trials. The system enables a project coordinator to build a data collection and management system specifically related to study protocol workflow without programming. A Web Access to DICOM Objects (WADO) module with novel features is integrated to further facilitate imaging related studies. The system was initially evaluated in an imaging based rehabilitation clinical trial. The evaluation shows that the cost of system development can be much reduced compared to a custom-built system. By providing a solution to customize a system and automate the workflow, the system will save on development time and reduce errors, especially for imaging clinical trials. PMID:25870169
An Auto-management Thesis Program WebMIS Based on Workflow
NASA Astrophysics Data System (ADS)
Chang, Li; Jie, Shi; Weibo, Zhong
An auto-management WebMIS based on workflow for a bachelor thesis program is presented in this paper. A module for workflow dispatching is designed and implemented using MySQL and J2EE according to the working principle of a workflow engine. The module can automatically dispatch the workflow according to the system date, login information, and the work status of the user. The WebMIS changes the management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.
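The following small Python sketch shows the kind of dispatching logic described, choosing the next activity from the system date, the user's role, and the work item's status; the status names, roles, and deadline rule are invented for illustration, and the paper's module itself is built on MySQL and J2EE.

# Toy dispatcher: route a thesis work item to its next activity.
from datetime import date

def dispatch(item, user_role, today=None):
    """Return the next workflow activity for a thesis work item."""
    today = today or date.today()
    if item["status"] == "draft" and user_role == "student":
        return "submit_proposal"
    if item["status"] == "submitted" and user_role == "advisor":
        return "review_proposal"
    if item["status"] == "approved" and today > item["defense_deadline"]:
        return "escalate_to_committee"      # overdue items are rerouted
    if item["status"] == "approved":
        return "schedule_defense"
    return "wait"

thesis = {"status": "approved", "defense_deadline": date(2024, 6, 1)}
print(dispatch(thesis, "advisor", today=date(2024, 6, 15)))  # escalate_to_committee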
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
IQ-Station: A Low Cost Portable Immersive Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric Whiting; Patrick O'Leary; William Sherman
2010-11-01
The emergence of inexpensive 3D TVs, affordable input and rendering hardware and open-source software has created a yeasty atmosphere for the development of low-cost immersive environments (IE). A low cost IE system, or IQ-station, fashioned from commercial off the shelf technology (COTS), coupled with a targeted immersive application can be a viable laboratory instrument for enhancing scientific workflow for exploration and analysis. The use of an IQ-station in a laboratory setting also has the potential of quickening the adoption of a more sophisticated immersive environment as a critical enabler in modern scientific and engineering workflows. Prior work in immersive environments generally required either a head mounted display (HMD) system or a large projector-based implementation, both of which have limitations in terms of cost, usability, or space requirements. The solution presented here provides an alternative platform providing a reasonable immersive experience that addresses those limitations. Our work brings together the needed hardware and software to create a fully integrated immersive display and interface system that can be readily deployed in laboratories and common workspaces. By doing so, it is now feasible for immersive technologies to be included in researchers' day-to-day workflows. The IQ-Station sets the stage for much wider adoption of immersive environments outside the small communities of virtual reality centers.
Watkins, Nicholas; Kennedy, Mary; Lee, Nelson; O'Neill, Michael; Peavey, Erin; Ducharme, Maria; Padula, Cynthia
2012-05-01
This study explored the impact of unit design and healthcare information technology (HIT) on nursing workflow and patient-centered care (PCC). Healthcare information technology and unit layout-related predictors of nursing workflow and PCC were measured during a 3-phase study involving questionnaires and work sampling methods. Stepwise multiple linear regressions demonstrated several HIT and unit layout-related factors that impact nursing workflow and PCC.
Scientific Data Management (SDM) Center for Enabling Technologies. 2007-2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludascher, Bertram; Altintas, Ilkay
Over the past five years, our activities have both established Kepler as a viable scientific workflow environment and demonstrated its value across multiple science applications. We have published numerous peer-reviewed papers on the technologies highlighted in this short paper and have given Kepler tutorials at SC06, SC07, SC08, and SciDAC 2007. Our outreach activities have allowed scientists to learn best practices and better utilize Kepler to address their individual workflow problems. Our contributions to advancing the state-of-the-art in scientific workflows have focused on the following areas; progress in each of these areas is described in subsequent sections. Workflow development: the development of a deeper understanding of scientific workflows "in the wild" and of the requirements for support tools that allow easy construction of complex scientific workflows. Generic workflow components and templates: the development of generic actors (i.e., workflow components and processes) which can be broadly applied to scientific problems. Provenance collection and analysis: the design of a flexible provenance collection and analysis infrastructure within the workflow environment. Workflow reliability and fault tolerance: the improvement of the reliability and fault-tolerance of workflow environments.
RetroPath2.0: A retrosynthesis workflow for metabolic engineers.
Delépine, Baudoin; Duigou, Thomas; Carbonell, Pablo; Faulon, Jean-Loup
2018-01-01
Synthetic biology applied to industrial biotechnology is transforming the way we produce chemicals. However, despite advances in the scale and scope of metabolic engineering, the research and development process still remains costly. In order to expand the chemical repertoire for the production of next generation compounds, a major engineering biology effort is required in the development of novel design tools that target chemical diversity through rapid and predictable protocols. Addressing that goal involves retrosynthesis approaches that explore the chemical biosynthetic space. However, the complexity associated with the large combinatorial retrosynthesis design space has often been recognized as the main challenge hindering the approach. Here, we provide RetroPath2.0, an automated open source workflow for retrosynthesis based on generalized reaction rules that performs the retrosynthesis search from chassis to target through an efficient and well-controlled protocol. Its ease of use and the versatility of its applications make this tool a valuable addition to the biological engineer's bench. We show through several examples the application of the workflow to biotechnologically relevant problems, including the identification of alternative biosynthetic routes through enzyme promiscuity or the development of biosensors. We demonstrate in this way the ability of the workflow to streamline retrosynthesis pathway design and its major role in reshaping the design, build, test and learn pipeline by driving the process toward the objective of optimizing bioproduction. The RetroPath2.0 workflow is built using tools developed by the bioinformatics and cheminformatics community; because it is open source, we anticipate that community contributions will further expand the features of the workflow. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Experimenting with semantic web services to understand the role of NLP technologies in healthcare.
Jagannathan, V
2006-01-01
NLP technologies can play a significant role in healthcare, where a predominant segment of the clinical documentation is in text form. In a graduate course focused on understanding semantic web services at West Virginia University, a class project was designed with the purpose of exploring the potential use of NLP-based abstraction of clinical documentation. The role of NLP technology was simulated using human abstractors, and various workflows were investigated using public-domain workflow and semantic web service technologies. This poster explores the potential use of NLP and the role of workflow and semantic web technologies in developing healthcare IT environments.
Mirel, Barbara; Luo, Airong; Harris, Marcelline
2015-05-01
Collaborative research has many challenges. One under-researched challenge is how to align collaborators' research practices and evolving analytical reasoning with technologies, and configurations of technologies, that best support them. The goal of such alignment is to enhance collaborative problem solving capabilities in research. Toward this end, we draw on our own research and a synthesis of the literature to characterize the workflow of collaborating scientists in systems-level renal disease research. We describe the various phases of a hypothetical workflow among diverse collaborators within and across laboratories, extending from their primary analysis through secondary analysis. For each phase, we highlight required technology supports and, at times, complementary organizational supports. This survey, which matches collaborators' analysis practices and needs in research projects to technological supports, is preliminary; it is aimed ultimately at developing a research capability framework that can help scientists and technologists mutually understand workflows and the technologies that can enable and enhance them. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Clements, Oliver; Walker, Peter
2014-05-01
The cost of working with extremely large data sets is an increasingly important issue within the Earth Observation community. From global coverage data at any resolution to small coverage data at extremely high resolution, the community has always produced big data. This will only increase as new sensors are deployed and their data made available. Over time standard workflows have emerged. These have been facilitated by the production and adoption of standard technologies. Groups such as the International Organisation for Standardisation (ISO) and the Open Geospatial Consortium (OGC) have been a driving force in this area for many years. The production of standard protocols and interfaces such as OPeNDAP, Web Coverage Service (WCS), Web Processing Service (WPS) and the newer emerging standards such as Web Coverage Processing Service (WCPS) have helped to galvanise these workflows. As an example of a traditional workflow, assume a researcher wants to assess the temporal trend in chlorophyll concentration. This would involve a discovery phase, an acquisition phase, a processing phase and finally a derived product or analysis phase. Each element of this workflow has an associated temporal and monetary cost. Firstly, the researcher would require a high bandwidth connection or the acquisition phase would take too long. Secondly, the researcher must have their own expensive equipment for use in the processing phase. Both of these elements cost money and time. This can make the whole process prohibitive to scientists from the developing world or "citizen scientists" who do not have the necessary processing infrastructure. The use of emerging technologies can help improve both the monetary and time costs associated with these existing workflows. By utilising a WPS that is hosted at the same location as the data, a user is able to apply processing to the data without needing their own processing infrastructure. This, however, limits the user to predefined processes that are made available by the data provider. The emerging OGC WCPS standard combined with big data analytics engines may provide a mechanism to improve this situation. The technology allows users to create their own queries using an SQL-like query language and apply them over an available large data archive, once again at the data provider's end. This not only removes the processing cost whilst still allowing user-defined processes, it also reduces the bandwidth required, as only the final analysis or derived product needs to be downloaded. The maturity of the new technologies is at a stage where their use should be justified by a quantitative assessment rather than simply by the fact that they are new developments. We will present a study of the time and cost requirements for a selection of existing workflows and then show how new and emerging standards and technologies can help to both reduce the cost to the user by shifting processing to the data, and to reduce the bandwidth required for analysing large datasets, making analysis of big-data archives possible for a greater and more diverse audience.
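To illustrate the shift of processing to the data provider, the following Python sketch builds a small WCPS-style aggregation request whose answer, not the underlying archive, is all that crosses the network; the endpoint URL and coverage name are hypothetical, and the query is indicative of the general WCPS shape rather than validated against any particular server.

# Build (and print) a server-side aggregation request; nothing is sent here.
import urllib.parse

ENDPOINT = "https://example.org/rasdaman/ows"   # hypothetical WCPS endpoint

# Yearly mean chlorophyll over a small region, computed at the data provider.
wcps_query = (
    'for c in (CHL_MONTHLY) '
    'return encode(avg(c[Lat(45:50), Long(-10:-5), ansi("2010-01":"2010-12")]), "text/csv")'
)

params = urllib.parse.urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",   # WCS Processing extension request
    "query": wcps_query,
})

request_url = f"{ENDPOINT}?{params}"
print(request_url)
# Fetching this URL (for example with urllib.request.urlopen) would download
# only the few bytes of derived CSV, not the multi-gigabyte coverage itself.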
PGen: large-scale genomic variations analysis workflow and browser in SoyKB.
Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti
2016-10-06
With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB) ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 soybean germplasm lines, have also been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. The PGen workflow has been optimized for the most efficient analysis of soybean data using thorough testing and validation. This research serves as an example of best practices for development of genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. The PGen workflow can also be easily customized for analysis of data in other species.
Business process re-engineering a cardiology department.
Bakshi, Syed Murtuza Hussain
2014-01-01
The health care sector is the world's third largest industry and is facing several problems such as excessive waiting times for patients, lack of access to information, high costs of delivery and medical errors. Health care managers seek the help of process re-engineering methods to discover the best processes and to re-engineer existing processes to optimize productivity without compromising on quality. Business process re-engineering refers to the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality and speed. The present study was carried out at a tertiary care corporate hospital with a 1000-plus-bed facility. A descriptive study and case study method was used with intensive, careful and complete observation of patient flow, delays, and shortcomings in patient movement and workflow. Data were collected through observation and informal interviews and analyzed by matrix analysis. Flowcharts were drawn for the various work activities of the cardiology department, including the workflow of the admission process, workflow in the ward and ICCU, workflow of the patient for catheterization laboratory procedures, and the billing and discharge process. The problems of the existing system were studied, and improvements were recommended for the cardiology department, illustrated with flowcharts.
Beyond usability: designing effective technology implementation systems to promote patient safety.
Karsh, B-T
2004-10-01
Evidence is emerging that certain technologies such as computerized provider order entry may reduce the likelihood of patient harm. However, many technologies that should reduce medical errors have been abandoned because of problems with their design, their impact on workflow, and general dissatisfaction with them by end users. Patient safety researchers have therefore looked to human factors engineering for guidance on how to design technologies to be usable (easy to use) and useful (improving job performance, efficiency, and/or quality). While this is a necessary step towards improving the likelihood of end user satisfaction, it is still not sufficient. Human factors engineering research has shown that the manner in which technologies are implemented also needs to be designed carefully if benefits are to be realized. This paper reviews the theoretical knowledge on what leads to successful technology implementation and how this can be translated into specifically designed processes for successful technology change. The literature on diffusion of innovations, technology acceptance, organisational justice, participative decision making, and organisational change is reviewed and strategies for promoting successful implementation are provided. Given the rapid and ever increasing pace of technology implementation in health care, it is critical for the science of technology implementation to be understood and incorporated into efforts to improve patient safety.
Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.
Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell
2011-07-26
Biological databases and computational biology tools are provided by research groups around the world and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has been commonly addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform that would tie bioinformatics resources together, namely Web Services. In the last decade Web Services have spread widely in bioinformatics and earned the status of a recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables the efficient flow of large volumes of data between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence makes it possible to execute in silico experiments at genome scale using standard SOAP Web Services and workflows. As a proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
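The data-partitioning pattern can be illustrated with a short, hypothetical Python sketch: the orchestrator splits a large record set into bounded chunks, invokes the service once per chunk, and concatenates the partial results. The annotate_chunk function stands in for a real SOAP call (for example through a generated WSDL client) and is not the authors' interface.

```python
# A minimal sketch of data partitioning: the service only ever sees bounded payloads.
from typing import Iterator, List

def partition(records: List[str], chunk_size: int) -> Iterator[List[str]]:
    """Yield fixed-size slices of the input record set."""
    for i in range(0, len(records), chunk_size):
        yield records[i:i + chunk_size]

def annotate_chunk(chunk: List[str]) -> List[str]:
    # Placeholder for one SOAP Web Service invocation on a single partition.
    return [seq + ":annotated" for seq in chunk]

def annotate_streaming(records: List[str], chunk_size: int = 100) -> List[str]:
    results: List[str] = []
    for chunk in partition(records, chunk_size):
        results.extend(annotate_chunk(chunk))  # partial results are concatenated
    return results

if __name__ == "__main__":
    annotated = annotate_streaming([f"seq{i}" for i in range(250)], chunk_size=100)
    print(len(annotated), annotated[:3])
```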
a Workflow for UAV's Integration Into a Geodesign Platform
NASA Astrophysics Data System (ADS)
Anca, P.; Calugaru, A.; Alixandroae, I.; Nazarie, R.
2016-06-01
This paper presents a workflow for the development of various Geodesign scenarios. The subject is important in the context of identifying patterns and designing solutions for a Smart City with optimized public transportation, efficient buildings, efficient utilities, recreational facilities and so on. The workflow describes the procedures starting with acquiring data in the field, data processing, orthophoto generation, DTM generation, integration into a GIS platform, and analysis to better support Geodesign. Esri's CityEngine is used mostly for its 3D modeling capabilities, which enable the user to obtain realistic 3D models. The workflow uses as inputs information extracted from images acquired with UAV technology (namely the eBee), existing 2D GIS geodatabases, and a set of CGA rules. The method used next, procedural modeling, applies rules to extrude buildings, the street network, parcel zoning and side details, based on the initial attributes from the geodatabase. The resulting products are various scenarios for redesign and for analyzing new exploitation sites. Finally, these scenarios can be published as interactive web scenes for internal, group or public consultation. In this way, problems such as the impact of new constructions being built, the re-arrangement of green spaces or changes to public transportation routes are revealed through impact, visibility or shadowing analysis and are brought to citizens' attention. This leads to better decisions.
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning applications from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors such as ecosystems, hazards, and various biogeochemical processes. In space physics, most models are built from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with the data is prohibitively inefficient. A major barrier to progress is that modeling workflows aren't deemed by practitioners to be a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models, an inability to keep pace with accelerating data production rates, and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources. This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer-term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
Clinical Decision Support Systems (CDSS) for preventive management of COPD patients.
Velickovski, Filip; Ceccaroni, Luigi; Roca, Josep; Burgos, Felip; Galdiz, Juan B; Marina, Nuria; Lluch-Ariet, Magí
2014-11-28
The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine at critical decision points during the execution of a care pathway. The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD) which can be easily integrated into healthcare providers' workflows. The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, and (iii) patient stratification was deployed on-line in a secured environment. The CDSS diagnostic performance was assessed using a validation set of 323 cases, yielding 90% specificity and 96% sensitivity. Web services were integrated into existing health information system platforms. Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations with a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' workflows can be achieved seamlessly through the use of a modular design and service-oriented architecture that connect to existing health information systems.
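The abstract does not disclose the CDSS's internal decision logic, so the following Python sketch is a purely hypothetical illustration of how a stateless case-finding rule could be exposed as a small service-style function. It uses the widely cited GOLD spirometry criterion (post-bronchodilator FEV1/FVC below 0.70 suggests airflow obstruction); the additional referral heuristic for older smokers is invented for the example and is not clinical guidance.

```python
# Hypothetical illustration only: not the CDSS's actual decision logic.
from dataclasses import dataclass

@dataclass
class Spirometry:
    fev1_l: float              # forced expiratory volume in 1 s, litres
    fvc_l: float               # forced vital capacity, litres
    post_bronchodilator: bool  # whether the measurement was post-bronchodilator

def copd_case_finding(s: Spirometry, smoker: bool, age: int) -> dict:
    """Return a recommendation payload such as a case-finding web service might emit."""
    ratio = s.fev1_l / s.fvc_l if s.fvc_l > 0 else 0.0
    obstruction = s.post_bronchodilator and ratio < 0.70  # GOLD fixed-ratio criterion
    # The referral heuristic below is invented for illustration, not clinical guidance.
    refer = obstruction or (smoker and age >= 40 and ratio < 0.75)
    return {
        "fev1_fvc_ratio": round(ratio, 2),
        "airflow_obstruction_suspected": obstruction,
        "recommend_specialist_referral": refer,
    }

print(copd_case_finding(Spirometry(fev1_l=2.1, fvc_l=3.4, post_bronchodilator=True),
                        smoker=True, age=55))
```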
Clinical Decision Support Systems (CDSS) for preventive management of COPD patients
2014-01-01
Background The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine at critical decision points during the execution of a care pathway. Objectives The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD) which can be easily integrated into healthcare providers' workflows. Methods The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. Results A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, and (iii) patient stratification was deployed on-line in a secured environment. The CDSS diagnostic performance was assessed using a validation set of 323 cases, yielding 90% specificity and 96% sensitivity. Web services were integrated into existing health information system platforms. Conclusions Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations with a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' workflows can be achieved seamlessly through the use of a modular design and service-oriented architecture that connect to existing health information systems. PMID:25471545
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience
Stockton, David B.; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to super computer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience.
Stockton, David B; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to super computer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.
The Symbiotic Relationship between Scientific Workflow and Provenance (Invited)
NASA Astrophysics Data System (ADS)
Stephan, E.
2010-12-01
The purpose of this presentation is to describe the symbiotic nature of scientific workflows and provenance. We will also discuss the current trends and real-world challenges facing these two distinct research areas. Although motivated differently, the needs of the international science communities are the glue that binds this relationship together. Understanding and articulating the science drivers to these communities is paramount as these technologies evolve and mature. Originally conceived for managing business processes, workflows are now becoming invaluable assets in both computational and experimental sciences. These reconfigurable, automated systems provide essential technology to perform complex analyses by coupling together geographically distributed, disparate data sources and applications. As a result, workflows are capable of higher throughput in a shorter amount of time than performing the steps manually. Today many different workflow products exist; these include Kepler and Taverna, or similar products like MeDICI, developed at PNNL, that are standardized on the Business Process Execution Language (BPEL). Provenance, originating from the French term "provenir" ("to come from"), is used to describe the curation process of artwork as art is passed from owner to owner. The concept of provenance was adopted by digital libraries as a means to track the lineage of documents while standards such as Dublin Core began to emerge. In recent years the systems science community has increasingly expressed the need to expand the concept of provenance to formally articulate the history of scientific data. Communities such as the International Provenance and Annotation Workshop (IPAW) have formalized a provenance data model, the Open Provenance Model, and the W3C is hosting a provenance incubator group featuring the Proof Markup Language. Although both workflows and provenance have risen from different communities and operate independently, their mutual success is tied together, forming a symbiotic relationship where research and development advances in one effort can provide tremendous benefits to the other. For example, automating provenance extraction within scientific applications is still a relatively new concept; the workflow engine provides the framework to capture application-specific operations, inputs, and resulting data. It provides a description of the process history and data flow by wrapping workflow components around the applications and data sources. On the other hand, a lack of cooperation between workflows and provenance can inhibit the usefulness of both to science. Blindly tracking the execution history without a true understanding of what kinds of questions end users may have makes the provenance indecipherable to the target users. Over the past nine years PNNL has been actively involved in provenance research in support of computational chemistry, molecular dynamics, biology, hydrology, and climate. PNNL has also been actively involved in efforts by the international community to develop open standards for provenance and in the development of architectures to support provenance capture, storage, and querying. This presentation will provide real-world use cases of how provenance and workflow can be leveraged and implemented to meet different needs, and the challenges that lie ahead.
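As a minimal sketch of the point about automated provenance capture, the following Python fragment wraps each workflow step in a decorator that records inputs, outputs and timestamps. A real system would persist these records in a provenance store (for example an Open Provenance Model-style graph) rather than an in-memory list, and the step functions here are placeholders.

```python
# Illustrative provenance capture: the "engine" wraps every step it runs.
import functools
import time

PROVENANCE: list[dict] = []  # stand-in for a persistent provenance store

def capture_provenance(step):
    @functools.wraps(step)
    def wrapper(*args, **kwargs):
        started = time.time()
        result = step(*args, **kwargs)
        PROVENANCE.append({
            "step": step.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "started": started,
            "ended": time.time(),
        })
        return result
    return wrapper

@capture_provenance
def regrid(dataset: str, resolution: float) -> str:
    return f"{dataset}@{resolution}deg"     # placeholder processing step

@capture_provenance
def compare(a: str, b: str) -> str:
    return f"diff({a},{b})"                 # placeholder analysis step

compare(regrid("AIRS", 1.0), regrid("MODIS", 1.0))
print(PROVENANCE[-1]["step"], "->", PROVENANCE[-1]["output"])
```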
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (a collaborative web environment).
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.
2008-12-01
NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
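The space/time "matchup" step described above can be sketched generically in Python: pair observations from two instruments when they fall within chosen spatial and temporal tolerances. A brute-force search is shown purely for clarity; an operational matchup service such as SciFlo's would use swath geometry and spatial indexing, and the sample values below are invented.

```python
# Generic space/time matchup sketch, not the SciFlo implementation.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Obs:
    lat: float
    lon: float
    time_s: float   # seconds since an arbitrary epoch
    value: float

def haversine_km(a: Obs, b: Obs) -> float:
    """Great-circle distance between two observations in kilometres."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def matchups(set_a, set_b, max_km=50.0, max_s=1800.0):
    """Return all (a, b) pairs within the spatial and temporal tolerances."""
    pairs = []
    for a in set_a:
        for b in set_b:
            if abs(a.time_s - b.time_s) <= max_s and haversine_km(a, b) <= max_km:
                pairs.append((a, b))
    return pairs

airs = [Obs(34.0, -118.0, 0.0, 285.3)]
modis = [Obs(34.2, -118.1, 600.0, 284.9), Obs(40.0, -100.0, 600.0, 280.1)]
print(len(matchups(airs, modis)))   # -> 1
```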
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.
2009-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). 
The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
Towards an intelligent hospital environment: OR of the future.
Sutherland, Jeffrey V; van den Heuvel, Willem-Jan; Ganous, Tim; Burton, Matthew M; Kumar, Animesh
2005-01-01
Patients, providers, payers, and government demand more effective and efficient healthcare services, and the healthcare industry needs innovative ways to re-invent core processes. Business process reengineering (BPR) has shown that adopting new hospital information systems can leverage this transformation and that workflow management technologies can automate process management. Our research indicates that workflow technologies in healthcare require real-time patient monitoring, detection of adverse events, and adaptive responses to breakdowns in normal processes. Adaptive workflow systems are rarely implemented, making current workflow implementations inappropriate for healthcare. The advent of evidence-based medicine, guideline-based practice, and a better understanding of cognitive workflow, combined with novel technologies including Radio Frequency Identification (RFID), mobile/wireless technologies, internet workflow, intelligent agents, and Service Oriented Architectures (SOA), opens up new and exciting ways of automating business processes. Total situational awareness of events, timing, and location of healthcare activities can generate self-organizing change in the behaviors of humans and machines. A test bed of a novel approach towards continuous process management was designed for the new Weinburg Surgery Building at the University of Maryland Medical Center. Early results based on clinical process mapping and analysis of patient flow bottlenecks demonstrated 100% improvement in delivery of supplies and instruments at surgery start time. This work has been directly applied to the design of the DARPA Trauma Pod research program, where robotic surgery will be performed on wounded soldiers on the battlefield.
Radiology information system: a workflow-based approach.
Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P
2009-09-01
Introducing workflow management technology in healthcare seems promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system, using the radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be taken as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes and enhance process management in the department. It can also provide a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing a radiology information system with more flexibility, richer process management functionality and more workflow-aware integration. The work of this paper is an initial endeavor to introduce workflow management technology into healthcare.
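A generic sketch (not the authors' implementation) of the architecture described above: each workflow task is bound to a loosely coupled component behind a common interface, and a legacy system is wrapped by an adapter so the engine can drive it like any other component. Class and task names are hypothetical.

```python
# Sketch of task-to-component binding with a legacy-system adapter.
from typing import Protocol

class Component(Protocol):
    def execute(self, context: dict) -> dict: ...

class RegistrationComponent:
    def execute(self, context: dict) -> dict:
        return {**context, "registered": True}

class LegacyRISAdapter:
    """Wraps a non-workflow-aware legacy interface behind the standard one."""
    def execute(self, context: dict) -> dict:
        report = f"LEGACY-REPORT[{context['patient_id']}]"   # stand-in legacy call
        return {**context, "report": report}

TASK_BINDINGS: dict[str, Component] = {
    "register_patient": RegistrationComponent(),
    "write_report": LegacyRISAdapter(),
}

def run_process(tasks: list[str], context: dict) -> dict:
    for task in tasks:                     # the engine drives the task order
        context = TASK_BINDINGS[task].execute(context)
    return context

print(run_process(["register_patient", "write_report"], {"patient_id": "P001"}))
```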
Measuring, managing and maximizing refinery performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bascur, O.A.; Kennedy, J.P.
1996-01-01
Implementing continuous quality improvement is a confluence of total quality management, people empowerment, performance indicators and information engineering. Supporting information technologies allow a refiner to narrow the gap between management objectives and the process control level. Dynamic performance monitoring benefits come from production cost savings, improved communications and enhanced decision making. A refinery workgroup information flow model helps automate continuous improvement of processes, performance and the organization. The paper discusses the rethinking of refinery operations, dynamic performance monitoring, continuous process improvement, the knowledge coordinator and repository manager, an integrated plant operations workflow, and successful implementation.
An end-to-end workflow for engineering of biological networks from high-level specifications.
Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun
2012-08-17
We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
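Schematically, the four-stage hand-off (Specification, Compilation, Part Assignment, Assembly) can be pictured as a chain of functions, as in the Python sketch below. The data structures, program text and part names are placeholders; the real toolchain compiles Proto programs into abstract genetic regulatory networks and robot-ready assembly plans.

```python
# Schematic only: placeholder stand-ins for the four toolchain stages.
def specification(program_text: str) -> dict:
    return {"logic": program_text}                            # Boolean logic specification

def compilation(spec: dict) -> dict:
    return {"agrn": f"AGRN({spec['logic']})"}                 # abstract genetic regulatory network

def part_assignment(agrn: dict, part_db: dict) -> dict:
    return {"sequence": part_db["promoter"] + agrn["agrn"]}   # DNA sequence design

def assembly(design: dict) -> str:
    return f"assembly plan for {design['sequence']}"          # robot-assisted protocol

spec = specification("not (IPTG and aTc)")                    # hypothetical program
design = part_assignment(compilation(spec), {"promoter": "pTet-"})
print(assembly(design))
```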
A technology training protocol for meeting QSEN goals: Focusing on meaningful learning.
Luo, Shuhong; Kalman, Melanie
2018-01-01
The purpose of this paper is to describe and discuss how we designed and developed a 12-step technology training protocol. The protocol is meant to improve meaningful learning in technology education so that nursing students are able to meet the informatics requirements of Quality and Safety Education in Nursing competencies. When designing and developing the training protocol, we used a simplified experiential learning model that addressed the core features of meaningful learning: to connect new knowledge with students' prior knowledge and real-world workflow. Before training, we identified students' prior knowledge and workflow tasks. During training, students learned by doing, reflected on their prior computer skills and workflow, designed individualized procedures for integration into their workflow, and practiced the self-designed procedures in real-world settings. The trainer was a facilitator who provided a meaningful learning environment, asked the right questions to guide reflective conversation, and offered scaffoldings at critical moments. This training protocol could significantly improve nurses' competencies in using technologies and increase their desire to adopt new technologies. © 2017 Wiley Periodicals, Inc.
An Integrated Cyberenvironment for Event-Driven Environmental Observatory Research and Education
NASA Astrophysics Data System (ADS)
Myers, J.; Minsker, B.; Butler, R.
2006-12-01
National environmental observatories will soon provide large-scale data from diverse sensor networks and community models. While much attention is focused on piping data from sensors to archives and users, truly integrating these resources into the everyday research activities of scientists and engineers across the community, and enabling their results and innovations to be brought back into the observatory, is also critical to the long-term success of the observatories, yet is often neglected. This talk will give an overview of the Environmental Cyberinfrastructure Demonstrator (ECID) Cyberenvironment for observatory-centric environmental research and education, under development at the National Center for Supercomputing Applications (NCSA), which is designed to address these issues. Cyberenvironments incorporate collaboratory and grid technologies, web services, and other cyberinfrastructure into an overall framework that balances the need for efficient coordination with the ability to innovate. They are designed to support the full scientific lifecycle, both in terms of individual experiments moving from data to workflows to publication and at the macro level where new discoveries lead to additional data, models, tools, and conceptual frameworks that augment and evolve community-scale systems such as observatories. The ECID cyberenvironment currently integrates five major components - a collaborative portal, workflow engine, event manager, metadata repository, and social network personalization capabilities - that have novel features inspired by the Cyberenvironment concept and enable powerful environmental research scenarios. A summary of these components and the overall cyberenvironment will be given in this talk, while other posters will give details on several of the components. The summary will be presented within the context of environmental use case scenarios created in collaboration with researchers from the WATERS (WATer and Environmental Research Systems) Network, a joint National Science Foundation-funded initiative of the hydrology and environmental engineering communities. The use case scenarios include identifying sensor anomalies in point and streaming sensor data and notifying data managers in near-real time, and referring users of data or data products (e.g., workflows, publications) to related data or data products.
Examining the Multi-level Fit between Work and Technology in a Secure Messaging Implementation.
Ozkaynak, Mustafa; Johnson, Sharon; Shimada, Stephanie; Petrakis, Beth Ann; Tulu, Bengisu; Archambeault, Cliona; Fix, Gemmae; Schwartz, Erin; Woods, Susan
2014-01-01
Secure messaging (SM) allows patients to communicate with their providers for non-urgent health issues. Like other health information technologies, the design and implementation of SM should account for workflow to avoid suboptimal outcomes. SM may present unique workflow challenges because patients add a layer of complexity, as they are also direct users of the system. This study explores SM implementation at two Veterans Health Administration facilities. We interviewed twenty-nine members of eight primary care teams using semi-structured interviews. Questions addressed staff opinions about the integration of SM with daily practice, and team members' attitudes and experiences with SM. We describe the clinical workflow for SM, examining complexity and variability. We identified eight workflow issues directly related to efficiency and patient satisfaction, based on an exploration of the technology fit with multilevel factors. These findings inform organizational interventions that will accommodate SM implementation and lead to more patient-centered care.
NASA Astrophysics Data System (ADS)
Harris, A. T.; Ramachandran, R.; Maskey, M.
2013-12-01
The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out-of-box. Thus, as part of the NASA funded project, Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within IDL workbench. Such additional features within IDL workbench are possible because IDL workbench is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionalities. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows could be executed in batch mode on large data in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.
A Model of Workflow Composition for Emergency Management
NASA Astrophysics Data System (ADS)
Xin, Chen; Bin-ge, Cui; Feng, Zhang; Xue-hui, Xu; Shan-shan, Fu
Commonly used workflow technology is not flexible enough to deal with concurrent emergency situations. The paper proposes a novel model for defining emergency plans, in which workflow segments appear as a constituent part. A formal abstraction, which contains four operations, is defined to compose workflow segments under constraint rules. A software system for constructing and composing business process resources is implemented and integrated into the Emergency Plan Management Application System.
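The abstract does not name the four composition operations, so the Python sketch below assumes the composition operators commonly used for workflow segments (sequence, parallel split, exclusive choice, iteration) purely to illustrate how segments could be composed and checked against a constraint rule before an emergency plan is enacted; every name and rule here is hypothetical.

```python
# Hypothetical composition operators over workflow segments.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Segment:
    name: str
    steps: List[str]

def sequence(a: Segment, b: Segment) -> Segment:
    return Segment(f"({a.name};{b.name})", a.steps + b.steps)

def parallel(a: Segment, b: Segment) -> Segment:
    return Segment(f"({a.name}||{b.name})", a.steps + b.steps)  # order-independent branches

def choice(cond: Callable[[dict], bool], a: Segment, b: Segment, ctx: dict) -> Segment:
    return a if cond(ctx) else b

def iterate(a: Segment, times: int) -> Segment:
    return Segment(f"({a.name})^{times}", a.steps * times)

def satisfies_constraint(seg: Segment) -> bool:
    # Toy constraint rule: every composed plan must contain an evacuation step.
    return any("evacuate" in step for step in seg.steps)

alert = Segment("alert", ["notify responders"])
evac = choice(lambda ctx: ctx["flooded"],
              Segment("evacuate", ["evacuate zone A"]),
              Segment("shelter", ["shelter in place"]),
              {"flooded": True})
plan = sequence(alert, parallel(evac, iterate(Segment("patrol", ["patrol perimeter"]), 2)))
print(plan.name, satisfies_constraint(plan))
```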
Re-Engineering Complex Legacy Systems at NASA
NASA Technical Reports Server (NTRS)
Ruszkowski, James; Meshkat, Leila
2010-01-01
The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of the project and its progress over time, a goal-based design and development approach, a small team of highly specialized people in areas that are critical to the project, and an interactive approach for infusing new technologies into existing processes. This project, which has had a relatively small amount of funding, is on the cutting edge with respect to the utilization of model-based design and systems engineering. An overarching challenge that was overcome by this project was to convince upper management of the needs and merits of giving up more conventional design methodologies (such as paper-based documents and unwieldy and unstructured flow diagrams and schedules) in favor of advanced model-based systems engineering approaches.
Towards usable and interdisciplinary e-infrastructure (Invited)
NASA Astrophysics Data System (ADS)
de Roure, D.
2010-12-01
e-Science and cyberinfrastructure at their outset tended to focus on ‘big science’ and cross-organisational infrastructures, demonstrating complex engineering with the promise of high returns. It soon became evident that the key to researchers harnessing new technology for everyday use is a user-centric approach which empowers the user - both from a developer and an end-user viewpoint. For example, this philosophy is demonstrated in workflow systems for systematic data processing and in the Web 2.0 approach as exemplified by the myExperiment social web site for sharing workflows, methods and ‘research objects’. Hence the most disruptive aspect of Cloud and virtualisation is perhaps that they make new computational resources and applications usable, creating a flourishing ecosystem for routine processing and innovation alike - and in this we must consider software sustainability. This talk will discuss the changing nature of the e-Science digital ecosystem, focus on the e-infrastructure for cross-disciplinary work, and highlight issues in sustainable software development in this context.
Biowep: a workflow enactment portal for bioinformatics applications.
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-03-08
The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack these skills. A portal enabling such researchers to benefit from new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main workflow processing steps are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve the automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.
Biowep: a workflow enactment portal for bioinformatics applications
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-01-01
Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack these skills. A portal enabling such researchers to benefit from new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main workflow processing steps are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve the automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics – LITBIO. PMID:17430563
Home care and technology: a case study.
Stroulia, Eleni; Nikolaidis, Ioanis; Liu, Lili; King, Sharla; Lessard, Lysanne
2012-01-01
Health care aides (HCAs) are the backbone of the home care system and provide a range of services to people who, for various reasons related to chronic conditions and aging, are not able to take care of themselves independently. The demand for HCA services will increase and the current HCA supply will likely not keep up with this increasing demand without fundamental changes in the current environment. Information and communication technology (ICT) can address some of the workflow challenges HCAs face. In this project, we conducted an ethnographic study to document and analyse HCAs' workflows and team interactions. Based on our findings, we designed an ICT tool suite, integrating easily available existing and newly developed (by our team) technologies to address these issues. Finally, we simulated the deployment of our technologies, to assess the potential impact of these technological solutions on the workflow and productivity of HCAs, their healthcare teams and client care.
Prototype of Kepler Processing Workflows For Microscopy And Neuroinformatics
Astakhov, V.; Bandrowski, A.; Gupta, A.; Kulungowski, A.W.; Grethe, J.S.; Bouwer, J.; Molina, T.; Rowley, V.; Penticoff, S.; Terada, M.; Wong, W.; Hakozaki, H.; Kwon, O.; Martone, M.E.; Ellisman, M.
2016-01-01
We report on progress in employing the Kepler workflow engine to prototype "end-to-end" application integration workflows that concern data coming from microscopes deployed at the National Center for Microscopy and Imaging Research (NCMIR). This system is built upon the mature code base of the Cell Centered Database (CCDB) and the integrated Rule-Oriented Data System (iRODS) for distributed storage. It provides integration with external projects such as the Whole Brain Catalog (WBC) and the Neuroscience Information Framework (NIF), which benefit from NCMIR data. We also report on specific workflows which spawn from the main workflows and perform data fusion and orchestration of Web services specific to the NIF project. This "Brain data flow" presents a user with categorized information about sources that have information on various brain regions. PMID:28479932
NASA Astrophysics Data System (ADS)
Lengyel, F.; Yang, P.; Rosenzweig, B.; Vorosmarty, C. J.
2012-12-01
The Northeast Regional Earth System Model (NE-RESM, NSF Award #1049181) integrates weather research and forecasting models, terrestrial and aquatic ecosystem models, a water balance/transport model, and mesoscale and energy-systems input-output economic models developed by an interdisciplinary research team from academia and government with expertise in physics, biogeochemistry, engineering, energy, economics, and policy. NE-RESM is intended to forecast the implications of planning decisions on the region's environment, ecosystem services, energy systems and economy through the 21st century. Integration of model components and the development of cyberinfrastructure for interacting with the system are facilitated by the integrated Rule-Oriented Data System (iRODS), a distributed data grid that provides archival storage with metadata facilities and a rule-based workflow engine for automating and auditing scientific workflows.
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for the GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to semi-automatically compose a workflow of Grid services, execute the composed workflow application in a Grid computing environment, monitor the performance of the Grid infrastructure and the Grid applications, analyze the resulting monitoring information, capture the knowledge that is contained in the information by means of intelligent agents, and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, which allows it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system has been initially tested and evaluated with applications from the ES clusters.
Inferring Clinical Workflow Efficiency via Electronic Medical Record Utilization
Chen, You; Xie, Wei; Gunter, Carl A; Liebovitz, David; Mehrotra, Sanjay; Zhang, He; Malin, Bradley
2015-01-01
Complexity in clinical workflows can lead to inefficiency in making diagnoses, ineffectiveness of treatment plans and uninformed management of healthcare organizations (HCOs). Traditional strategies to manage workflow complexity are based on measuring the gaps between workflows defined by HCO administrators and the actual processes followed by staff in the clinic. However, existing methods tend to neglect the influences of EMR systems on the utilization of workflows, which could be leveraged to optimize workflows facilitated through the EMR. In this paper, we introduce a framework to infer clinical workflows through the utilization of an EMR and show how such workflows roughly partition into four types according to their efficiency. Our framework infers workflows at several levels of granularity through data mining technologies. We study four months of EMR event logs from a large medical center, including 16,569 inpatient stays, and illustrate that approximately 95% of workflows are efficient and that 80% of patients are on such workflows. At the same time, we show that the remaining 5% of workflows may be inefficient due to a variety of factors, such as complex patients. PMID:26958173
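A minimal sketch of the general idea, not the authors' framework: coarse workflow structure can be inferred from EMR audit logs by counting how often one recorded action follows another within the same inpatient stay. The event fields and action names below are invented for illustration.

```python
# Minimal sketch: infer coarse workflows from EMR audit logs by counting how
# often one EMR action follows another within a patient stay.
from collections import Counter, defaultdict

def transition_counts(event_log):
    """event_log: iterable of (stay_id, timestamp, action) tuples."""
    by_stay = defaultdict(list)
    for stay_id, ts, action in event_log:
        by_stay[stay_id].append((ts, action))
    counts = Counter()
    for events in by_stay.values():
        events.sort()                           # order actions by time
        for (_, a), (_, b) in zip(events, events[1:]):
            counts[(a, b)] += 1                 # a -> b transition
    return counts

log = [(1, 1, "admit"), (1, 2, "order_labs"), (1, 3, "review_results"),
       (2, 1, "admit"), (2, 2, "order_labs"), (2, 3, "discharge")]
print(transition_counts(log).most_common(3))
```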
NASA Astrophysics Data System (ADS)
Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelize and scale them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the level of abstractions provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the in/output data at a logical level, the data dependency between tasks, and the e-Infrastructures and the execution engine to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.
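As a rough illustration of the hybrid composition described above, the sketch below builds a two-task Pegasus workflow (Phase1 feeding Phase2) with the Pegasus 5 Python API. That API postdates Asterism, and the transformation and file names are hypothetical, so this is an approximation of the pattern rather than the authors' actual workflow.

```python
# Illustrative sketch only: a two-task Pegasus workflow in the spirit of the
# abstract (Phase1 -> Phase2). Written against the Pegasus 5 Python API, whose
# details may differ from the Pegasus version used by Asterism; script and
# file names are hypothetical.
from Pegasus.api import Workflow, Job, File

raw = File("traces.tar")             # input seismic traces (placeholder name)
pre = File("preprocessed.tar")
xcorr = File("crosscorr.tar")

wf = Workflow("asterism-like", infer_dependencies=True)
phase1 = Job("run_dispel4py_phase1").add_inputs(raw).add_outputs(pre)
phase2 = Job("run_dispel4py_phase2").add_inputs(pre).add_outputs(xcorr)
wf.add_jobs(phase1, phase2)          # Phase2 depends on Phase1 via shared file
wf.write("workflow.yml")             # abstract workflow, ready for planning
```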
New roles & responsibilities of hospital biomedical engineering.
Frisch, P H; Stone, B; Booth, P; Lui, W
2014-01-01
Over the last decade the changing healthcare environment has required hospitals and specifically Biomedical Engineering to critically evaluate, optimize and adapt their operations. The focus is now on new technologies, changes to the environment of care, support requirements and financial constraints. Memorial Sloan Kettering Cancer Center (MSKCC), an NIH-designated comprehensive cancer center, has been transitioning to an increasingly outpatient care environment. This transition is driving an increase in patient acuity, coupled with the need for more urgent support and faster response times. New technologies, regulatory requirements and financial constraints have impacted operating budgets and, in some cases, resulted in a reduction in staffing. Specific initiatives, such as the Joint Commission's National Patient Safety Goals, requirements for an electronic medical record, meaningful use, and ICD-10, have caused institutions to reevaluate their operations and processes, including requiring Biomedical Engineering to manage new technologies, integrations and changes in the electromagnetic environment, while optimizing operational workflow and resource utilization. This paper addresses the new and expanding responsibilities and approach of Biomedical Engineering organizations, specifically at MSKCC. It is suggested that our experience may be a template for other organizations facing similar problems. Increasing support is necessary for Medical Software and Medical Device Data Systems in the evolving wireless environment, including RTLS and RFID. It will also be necessary to evaluate the potential impact on the growing electromagnetic environment and on connectivity, which creates a need for dynamic and interactive testing, and to establish new operational synergies with Information Technology operations and other operational groups within the institution, such as nursing, facilities management, central supply, and the user departments.
Addressing informatics challenges in Translational Research with workflow technology.
Beaulah, Simon A; Correll, Mick A; Munro, Robin E J; Sheldon, Jonathan G
2008-09-01
Interest in Translational Research has been growing rapidly in recent years. In this collision of different data, technologies and cultures lie tremendous opportunities for the advancement of science and business for organisations that are able to integrate, analyse and deliver this information effectively to users. Workflow-based integration and analysis systems are becoming recognised as a fast and flexible way to build applications that are tailored to scientific areas, yet are built on a common platform. Workflow systems are allowing organisations to meet the key informatics challenges in Translational Research and improve disease understanding and patient care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
Cornett, Alex; Kuziemsky, Craig
2015-01-01
Implementing team based workflows can be complex because of the scope of providers involved and the extent of information exchange and communication that needs to occur. While a workflow may represent the ideal structure of communication that needs to occur, information issues and contextual factors may impact how the workflow is implemented in practice. Understanding these issues will help us better design systems to support team based workflows. In this paper we use a case study of palliative sedation therapy (PST) to model a PST workflow and then use it to identify purposes of communication, information issues and contextual factors that impact them. We then suggest how our findings could inform health information technology (HIT) design to support team based communication workflows.
A WorkFlow Engine Oriented Modeling System for Hydrologic Sciences
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2009-12-01
In recent years the use of workflow engines for carrying out modeling and data analysis tasks has gained increased attention in the science and engineering communities. Tasks like processing raw data coming from sensors and passing these raw data streams to filters for QA/QC procedures may require multiple and complicated steps that need to be repeated over and over again. A workflow sequence that carries out a number of steps of varying complexity is an ideal approach to deal with these tasks because the sequence can be stored, called up and repeated again and again. This has several advantages: for one, it ensures repeatability of processing steps and with that provenance, an issue that is increasingly important in the science and engineering communities. It also permits the hand-off of lengthy, time-consuming and error-prone tasks to a chain of processing actions that are carried out automatically, reducing the chance for error on the one hand and freeing up time to carry out other tasks on the other. This paper presents the development of a workflow-engine-embedded modeling system that allows users to build up working sequences for carrying out numerical modeling tasks in the hydrologic sciences. Trident, which facilitates creating, running and sharing scientific data analysis workflows, is taken as the central working engine of the modeling system. Existing functionalities of the modeling system include digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis, stored as sequences or modules respectively. The sequences can be invoked to carry out their preset tasks in order, for example triangulating a watershed from a raw DEM, whereas the modules, each encapsulating a certain function, can be selected and connected through a GUI workboard to form new sequences. The modeling system is demonstrated by setting up a new sequence for simulating rainfall-runoff processes, which involves an embedded Penn State Integrated Hydrologic Model (PIHM) module for hydrologic simulation as a kernel, a DEM-processing sub-sequence that prepares geospatial data for PIHM, a data retrieval module that accesses time series data from online data repositories via web services or from a local database, and a post-processing data management module that stores, visualizes and analyzes model outputs.
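The idea of storing a reusable rainfall-runoff sequence can be sketched, independently of Trident, as a chain of composable functions; every step below is a placeholder rather than a real PIHM or DEM-processing implementation.

```python
# Generic sketch (not the Trident implementation): a reusable modeling
# sequence expressed as composable Python callables; all steps are placeholders.
def delineate_watershed(dem):
    return {"mesh": f"triangulated({dem})"}           # placeholder DEM step

def retrieve_forcing(station_id):
    return {"rainfall": f"timeseries({station_id})"}  # e.g. via web services

def run_pihm(mesh, forcing):
    return {"runoff": (mesh["mesh"], forcing["rainfall"])}  # placeholder model run

def rainfall_runoff_sequence(dem, station_id):
    mesh = delineate_watershed(dem)
    forcing = retrieve_forcing(station_id)
    return run_pihm(mesh, forcing)

print(rainfall_runoff_sequence("raw_dem.tif", "USGS-01463500"))
```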
Procedural Modeling for Rapid-Prototyping of Multiple Building Phases
NASA Astrophysics Data System (ADS)
Saldana, M.; Johanson, C.
2013-02-01
RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Workflow computing. Improving management and efficiency of pathology diagnostic services.
Buffone, G J; Moreau, D; Beck, J R
1996-04-01
Traditionally, information technology in health care has helped practitioners to collect, store, and present information and also to add a degree of automation to simple tasks (instrument interfaces supporting result entry, for example). Thus commercially available information systems do little to support the need to model, execute, monitor, coordinate, and revise the various complex clinical processes required to support health-care delivery. Workflow computing, which is already implemented and improving the efficiency of operations in several nonmedical industries, can address the need to manage complex clinical processes. Workflow computing not only provides a means to define and manage the events, roles, and information integral to health-care delivery but also supports the explicit implementation of policy or rules appropriate to the process. This article explains how workflow computing may be applied to health-care and the inherent advantages of the technology, and it defines workflow system requirements for use in health-care delivery with special reference to diagnostic pathology.
Illumina Production Sequencing at the DOE Joint Genome Institute - Workflow and Optimizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarver, Angela; Fern, Alison; Diego, Matthew San
2010-06-18
The U.S. Department of Energy (DOE) Joint Genome Institute's (JGI) Production Sequencing group is committed to the generation of high-quality genomic DNA sequence to support the DOE mission areas of renewable energy generation, global carbon management, and environmental characterization and clean-up. Within the JGI's Production Sequencing group, the Illumina Genome Analyzer pipeline has been established as one of three sequencing platforms, along with Roche/454 and ABI/Sanger. Optimization of the Illumina pipeline has been ongoing with the aim of continual process improvement of the laboratory workflow. These process improvement projects are being led by the JGI's Process Optimization, Sequencing Technologies, Instrumentation & Engineering, and New Technology Production groups. Primary focus has been on improving the procedural ergonomics and the technicians' operating environment, reducing manually intensive technician operations with different tools, reducing associated production costs, and improving the overall process and generated sequence quality. The U.S. DOE JGI was established in 1997 in Walnut Creek, CA, to unite the expertise and resources of five national laboratories (Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, and Pacific Northwest) along with the HudsonAlpha Institute for Biotechnology. JGI is operated by the University of California for the U.S. DOE.
Johnson, Kevin B; Lorenzi, Nancy M
2011-01-01
Objective The goal of this study was to develop an in-depth understanding of how a health information exchange (HIE) fits into clinical workflow at multiple clinical sites. Materials and Methods The ethnographic qualitative study was conducted over a 9-month period in six emergency departments (ED) and eight ambulatory clinics in Memphis, Tennessee, USA. Data were collected using direct observation, informal interviews during observation, and formal semi-structured interviews. The authors observed for over 180 h, during which providers used the exchange 130 times. Results HIE-related workflow was modeled for each ED site and ambulatory clinic group and substantial site-to-site workflow differences were identified. Common patterns in HIE-related workflow were also identified across all sites, leading to the development of two role-based workflow models: nurse based and physician based. The workflow elements framework was applied to the two role-based patterns. An in-depth description was developed of how providers integrated HIE into existing clinical workflow, including prompts for HIE use. Discussion Workflow differed substantially among sites, but two general role-based HIE usage models were identified. Although providers used HIE to improve continuity of patient care, patient–provider trust played a significant role. Types of information retrieved related to roles, with nurses seeking to retrieve recent hospitalization data and more open-ended usage by nurse practitioners and physicians. User and role-specific customization to accommodate differences in workflow and information needs may increase the adoption and use of HIE. Conclusion Understanding end users' perspectives towards HIE technology is crucial to the long-term success of HIE. By applying qualitative methods, an in-depth understanding of HIE usage was developed. PMID:22003156
de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea
2018-01-01
Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during a clinical diagnostic process, thus improving the decision-making process. Our objective was to implement a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B in which institution-specific medical decision-making processes could be adapted without modifying the workflow business logic. Separation of business logic and medical decision-making results in more easily reusable electronic clinical workflows.
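A minimal sketch of the separation of concerns described above, assuming a hypothetical REST endpoint for the Arden-Syntax rule engine; the actual ARDENSUITE and Activiti interfaces are not reproduced here.

```python
# Minimal sketch: a workflow task that delegates the medical decision to a
# rule engine over REST. The endpoint URL, request fields, and response field
# are hypothetical placeholders.
import requests

def hbv_prevention_decision(patient_data,
                            url="https://example.org/ardensuite/mlm/hbv-mtct"):
    """Business-logic task: send patient data to the rule engine and return
    its recommendation, keeping medical knowledge out of the workflow layer."""
    resp = requests.post(url, json=patient_data, timeout=10)
    resp.raise_for_status()
    return resp.json().get("recommendation")   # hypothetical response field

print(hbv_prevention_decision({"hbsag_positive": True, "gestational_week": 28}))
```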
Capturing Essential Information to Achieve Safe Interoperability
Weininger, Sandy; Jaffe, Michael B.; Rausch, Tracy; Goldman, Julian M.
2016-01-01
In this article we describe the role of “clinical scenario” information to assure the safety of interoperable systems, as well as the system’s ability to deliver the requisite clinical functionality to improve clinical care. Described are methods and rationale for capturing the clinical needs, workflow, hazards, and device interactions in the clinical environment. Key user (clinician and clinical engineer) needs and system requirements can be derived from this information, therefore improving the communication from clinicians to medical device and information technology system developers. This methodology is intended to assist the health care community, including researchers, standards developers, regulators, and manufacturers, by providing clinical definition to support requirements in the systems engineering process, particularly those focusing on development of Integrated Clinical Environments described in standard ASTM F2761. Our focus is on identifying and documenting relevant interactions and medical device capabilities within the system using a documentation tool called medical device interface data sheets (MDIDSa) and mitigating hazardous situations related to workflow, product usability, data integration, and the lack of effective medical device-health information technology system integration to achieve safe interoperability. Portions of the analysis of a clinical scenario for a “Patient-controlled analgesia safety interlock” are provided to illustrate the method. Collecting better clinical adverse event information and proposed solutions can help identify opportunities to improve current device capabilities and interoperability and support a Learning Health System to improve health care delivery. Developing and analyzing clinical scenarios are the first steps in creating solutions to address vexing patient safety problems and enable clinical innovation. A web-based research tool for implementing a means of acquiring and managing this information, the Clinical Scenario Repository™, is described. PMID:27387840
Capturing Essential Information to Achieve Safe Interoperability.
Weininger, Sandy; Jaffe, Michael B; Rausch, Tracy; Goldman, Julian M
2017-01-01
In this article, we describe the role of "clinical scenario" information to assure the safety of interoperable systems, as well as the system's ability to deliver the requisite clinical functionality to improve clinical care. Described are methods and rationale for capturing the clinical needs, workflow, hazards, and device interactions in the clinical environment. Key user (clinician and clinical engineer) needs and system requirements can be derived from this information, therefore, improving the communication from clinicians to medical device and information technology system developers. This methodology is intended to assist the health care community, including researchers, standards developers, regulators, and manufacturers, by providing clinical definition to support requirements in the systems engineering process, particularly those focusing on development of Integrated Clinical Environments described in standard ASTM F2761. Our focus is on identifying and documenting relevant interactions and medical device capabilities within the system using a documentation tool called medical device interface data sheets and mitigating hazardous situations related to workflow, product usability, data integration, and the lack of effective medical device-health information technology system integration to achieve safe interoperability. Portions of the analysis of a clinical scenario for a "patient-controlled analgesia safety interlock" are provided to illustrate the method. Collecting better clinical adverse event information and proposed solutions can help identify opportunities to improve current device capabilities and interoperability and support a learning health system to improve health care delivery. Developing and analyzing clinical scenarios are the first steps in creating solutions to address vexing patient safety problems and enable clinical innovation. A Web-based research tool for implementing a means of acquiring and managing this information, the Clinical Scenario Repository™ (MD PnP Program), is described.
Exploring Dental Providers’ Workflow in an Electronic Dental Record Environment
Schwei, Kelsey M; Cooper, Ryan; Mahnke, Andrea N.; Ye, Zhan
2016-01-01
Summary Background A workflow is defined as a predefined set of work steps and partial ordering of these steps in any environment to achieve the expected outcome. Few studies have investigated the workflow of providers in a dental office. It is important to understand the interaction of dental providers with the existing technologies at point of care to assess breakdown in the workflow which could contribute to better technology designs. Objective The study objective was to assess electronic dental record (EDR) workflows using time and motion methodology in order to identify breakdowns and opportunities for process improvement. Methods A time and motion methodology was used to study the human-computer interaction and workflow of dental providers with an EDR in four dental centers at a large healthcare organization. A data collection tool was developed to capture the workflow of dental providers and staff while they interacted with an EDR during initial, planned, and emergency patient visits, and at the front desk. Qualitative and quantitative analysis was conducted on the observational data. Results Breakdowns in workflow were identified while posting charges, viewing radiographs, e-prescribing, and interacting with patient scheduler. EDR interaction time was significantly different between dentists and dental assistants (6:20 min vs. 10:57 min, p = 0.013) and between dentists and dental hygienists (6:20 min vs. 9:36 min, p = 0.003). Conclusions On average, a dentist spent far less time than dental assistants and dental hygienists in data recording within the EDR. PMID:27437058
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.
2011-12-01
Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDR's) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDR's with full traceability. The capabilities and services provided include: - Discovery of the collections by keyword search, exposed using OpenSearch protocol; - Space/time query across the CDR's granules and all of the input datasets via OpenSearch; - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); - Self-documenting CDR's published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance; - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine; - Open Publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine; - Advertising of the metadata (e.g. physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format; - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops; - Rich "web browse" of the CDR's with full metadata and the provenance trail one click away; - Advertising of all services as Google-discoverable "service casts" using the Atom format. The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUI's.
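The on-the-fly OPeNDAP variable slicing and spatial subsetting mentioned above can be approximated with xarray, as in the sketch below; the dataset URL, variable name, and coordinate names are hypothetical stand-ins for the project's actual granules.

```python
# Minimal sketch of on-the-fly OPeNDAP subsetting (URL, variable, and
# coordinate names are hypothetical; no full-file download is performed).
import xarray as xr

URL = "https://example.org/opendap/airs_h2o_climatology.nc"
ds = xr.open_dataset(URL)                            # opens the remote dataset lazily
subset = ds["water_vapor"].sel(lat=slice(-30, 30),
                               lon=slice(100, 160))  # spatial subsetting
print(subset.mean().values)                          # only the needed slice is fetched
```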
Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.
2007-01-01
Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results on a greater number of proteins. Biological and technical replicates are mandatory, as is the combination of the MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data to one LC-MALDI and one LC-ESI dataset. The initial separate searches from the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on a peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
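A minimal sketch of the decoy-database strategy referenced above: pick the score threshold whose estimated false-positive rate (decoy hits divided by target hits above the threshold) stays within the 5% limit. The scores below are invented, and the exact ProteinExtractor procedure is not reproduced.

```python
# Minimal sketch of target-decoy FDR thresholding at a 5% maximum rate.
def threshold_at_fdr(hits, max_fdr=0.05):
    """hits: list of (score, is_decoy) tuples, higher score = better match."""
    hits = sorted(hits, key=lambda h: h[0], reverse=True)
    best = None
    targets = decoys = 0
    for score, is_decoy in hits:
        decoys += is_decoy
        targets += not is_decoy
        fdr = decoys / max(targets, 1)     # estimated false-positive rate
        if fdr <= max_fdr:
            best = score                   # lowest score still meeting the limit
    return best

hits = [(9.1, False), (8.7, False), (7.9, True), (7.5, False), (6.2, True)]
print(threshold_at_fdr(hits))
```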
A Scalable, Open Source Platform for Data Processing, Archiving and Dissemination
2016-01-01
Object Oriented Data Technology (OODT) big data toolkit developed by NASA and the Work-flow INstance Generation and Selection (WINGS) scientific work...to several challenge big data problems and demonstrated the utility of OODT-WINGS in addressing them. Specific demonstrated analyses address i... source software, Apache, Object Oriented Data Technology, OODT, semantic work-flows, WINGS, big data, work-flow management.
Cheng, Gong; Lu, Quan; Ma, Ling; Zhang, Guocai; Xu, Liang; Zhou, Zongshan
2017-01-01
Recently, Docker technology has received increasing attention throughout the bioinformatics community. However, its implementation has not yet been mastered by most biologists; accordingly, its application in biological research has been limited. In order to popularize this technology in the field of bioinformatics and to promote the use of publicly available bioinformatics tools, such as Dockerfiles and Images from communities, government sources, and private owners in the Docker Hub Registry and other Docker-based resources, we introduce here a complete and accurate bioinformatics workflow based on Docker. The present workflow enables analysis and visualization of pan-genomes and biosynthetic gene clusters of bacteria. This provides a new solution for bioinformatics mining of big data from various publicly available biological databases. The present step-by-step guide creates an integrative workflow through a Dockerfile to allow researchers to build their own Image and run Container easily.
Cheng, Gong; Zhang, Guocai; Xu, Liang
2017-01-01
Recently, Docker technology has received increasing attention throughout the bioinformatics community. However, its implementation has not yet been mastered by most biologists; accordingly, its application in biological research has been limited. In order to popularize this technology in the field of bioinformatics and to promote the use of publicly available bioinformatics tools, such as Dockerfiles and Images from communities, government sources, and private owners in the Docker Hub Registry and other Docker-based resources, we introduce here a complete and accurate bioinformatics workflow based on Docker. The present workflow enables analysis and visualization of pan-genomes and biosynthetic gene clusters of bacteria. This provides a new solution for bioinformatics mining of big data from various publicly available biological databases. The present step-by-step guide creates an integrative workflow through a Dockerfile to allow researchers to build their own Image and run Container easily. PMID:29204317
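For readers unfamiliar with driving Docker programmatically, the sketch below uses the Docker SDK for Python to build an Image and run a Container; the image tag, build context, and analysis command are hypothetical stand-ins for the pan-genome workflow described above.

```python
# Minimal sketch using the Docker SDK for Python; image tag, build context,
# and the analysis command are hypothetical placeholders.
import docker

client = docker.from_env()

# Build an Image from a local Dockerfile describing the bioinformatics stack.
image, _ = client.images.build(path=".", tag="pangenome-workflow")

# Run the analysis inside a Container and capture its output.
output = client.containers.run("pangenome-workflow",
                               "run_pangenome_analysis --input genomes/",
                               remove=True)
print(output.decode())
```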
Guitton, Yann; Tremblay-Franco, Marie; Le Corguillé, Gildas; Martin, Jean-François; Pétéra, Mélanie; Roger-Mele, Pierrick; Delabrière, Alexis; Goulitquer, Sophie; Monsoor, Misharl; Duperier, Christophe; Canlet, Cécile; Servien, Rémi; Tardivel, Patrick; Caron, Christophe; Giacomoni, Franck; Thévenot, Etienne A
2017-12-01
Metabolomics is a key approach in modern functional genomics and systems biology. Due to the complexity of metabolomics data, the variety of experimental designs, and the multiplicity of bioinformatics tools, providing experimenters with a simple and efficient resource to conduct comprehensive and rigorous analysis of their data is of utmost importance. In 2014, we launched the Workflow4Metabolomics (W4M; http://workflow4metabolomics.org) online infrastructure for metabolomics built on the Galaxy environment, which offers user-friendly features to build and run data analysis workflows including preprocessing, statistical analysis, and annotation steps. Here we present the new W4M 3.0 release, which contains twice as many tools as the first version, and provides two features which are, to our knowledge, unique among online resources. First, data from the four major metabolomics technologies (i.e., LC-MS, FIA-MS, GC-MS, and NMR) can be analyzed on a single platform. By using three studies in human physiology, alga evolution, and animal toxicology, we demonstrate how the 40 available tools can be easily combined to address biological issues. Second, the full analysis (including the workflow, the parameter values, the input data and output results) can be referenced with a permanent digital object identifier (DOI). Publication of data analyses is of major importance for robust and reproducible science. Furthermore, the publicly shared workflows are of high-value for e-learning and training. The Workflow4Metabolomics 3.0 e-infrastructure thus not only offers a unique online environment for analysis of data from the main metabolomics technologies, but it is also the first reference repository for metabolomics workflows. Copyright © 2017 Elsevier Ltd. All rights reserved.
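Galaxy-based workflows such as those shared on W4M can generally be launched through Galaxy's API, for example with the BioBlend client as sketched below; the server URL, API key, workflow name, and input dataset identifier are placeholders, and the real W4M workflow inputs will differ.

```python
# Minimal sketch: invoking a shared Galaxy workflow via BioBlend. All
# identifiers below are placeholders; consult the target server for real ones.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://workflow4metabolomics.example.org",
                    key="YOUR_API_KEY")

wf = gi.workflows.get_workflows(name="LC-MS preprocessing")[0]
invocation = gi.workflows.invoke_workflow(
    wf["id"],
    inputs={"0": {"src": "hda", "id": "dataset_id_placeholder"}},
    history_name="lcms-run")
print(invocation["id"])    # identifier of the launched workflow invocation
```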
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
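A toy sketch of the stoichiometry-optimization idea: given effective per-enzyme rates and a fixed total biocatalyst load, choose the enzyme fractions that maximize pathway flux. The three rate constants and the bottleneck-style flux model below are invented; the authors used full kinetic models of the five-step pathway.

```python
# Toy sketch of stoichiometry optimization: maximize a bottleneck-limited
# pathway flux subject to a fixed total enzyme load. Rates are made up.
import numpy as np
from scipy.optimize import minimize

k = np.array([1.0, 0.3, 0.6])            # toy effective rates for enzymes A, B, C

def neg_pathway_flux(e):
    return -np.min(k * e)                 # flux limited by the slowest step

cons = ({"type": "eq", "fun": lambda e: e.sum() - 1.0},)   # fixed total load
res = minimize(neg_pathway_flux, x0=np.ones(3) / 3,
               bounds=[(0, 1)] * 3, constraints=cons)
print("optimal enzyme fractions:", res.x.round(3))
```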
Digital transformation in home care. A case study.
Bennis, Sandy; Costanzo, Diane; Flynn, Ann Marie; Reidy, Agatha; Tronni, Catherine
2007-01-01
Simply implementing software and technology does not assure that an organization's targeted clinical and financial goals will be realized. It is no longer possible to roll out a new system, providing only end-user training and overlaying it on top of already inefficient workflows and outdated roles, and know with certainty that targets will be met. At Virtua Health's Home Care, based in southern New Jersey, implementation of their electronic system initially followed this more traditional approach. Unable to completely attain their earlier identified return on investment, they enlisted the help of a new role within their health system, that of the nurse informaticist. Knowledgeable in complex clinical processes and not bound by the technology at hand, the informaticist analyzed physical workflow, digital workflow, roles and physical layout. Leveraging specific tools such as change acceleration, workouts and LEAN, the informaticist was able to redesign workflow and support new levels of functionality. This article provides a view from the "finish line", recounting how this role worked with home care to assimilate information delivery into more efficient processes and align resources to support the new workflow, ultimately achieving real tangible returns.
Enabling Smart Workflows over Heterogeneous ID-Sensing Technologies
Giner, Pau; Cetina, Carlos; Lacuesta, Raquel; Palacios, Guillermo
2012-01-01
Sensing technologies in mobile devices play a key role in reducing the gap between the physical and the digital world. The use of automatic identification capabilities can improve user participation in business processes where physical elements are involved (Smart Workflows). However, identifying all objects in the user surroundings does not automatically translate into meaningful services to the user. This work introduces Parkour, an architecture that allows the development of services that match the goals of each of the participants in a smart workflow. Parkour is based on a pluggable architecture that can be extended to provide support for new tasks and technologies. In order to facilitate the development of these plug-ins, tools that automate the development process are also provided. Several Parkour-based systems have been developed in order to validate the applicability of the proposal. PMID:23202193
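A generic sketch of a Parkour-style pluggable design, with class and method names invented for illustration: each identification technology registers a plug-in that translates raw identifiers into workflow-relevant actions.

```python
# Generic sketch of a pluggable identification architecture; all names are
# invented for illustration and do not reproduce the Parkour codebase.
class IdentificationPlugin:
    technology = "base"
    def resolve(self, raw_id):
        raise NotImplementedError

class RFIDPlugin(IdentificationPlugin):
    technology = "rfid"
    def resolve(self, raw_id):
        return {"object": f"asset:{raw_id}", "task": "check_in_asset"}

class BarcodePlugin(IdentificationPlugin):
    technology = "barcode"
    def resolve(self, raw_id):
        return {"object": f"product:{raw_id}", "task": "update_inventory"}

# New technologies are supported by registering additional plug-ins here.
REGISTRY = {p.technology: p() for p in (RFIDPlugin, BarcodePlugin)}

def handle_scan(technology, raw_id):
    return REGISTRY[technology].resolve(raw_id)

print(handle_scan("rfid", "04A2-77"))
```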
NASA Technical Reports Server (NTRS)
1997-01-01
CENTRA 2000 Inc., a wholly owned subsidiary of Auto-trol technology, obtained permission to use software originally developed at Johnson Space Center for the Space Shuttle and early Space Station projects. To support their enormous information-handling needs, a product data management, electronic document management and work-flow system was designed. Initially, just 33 database tables comprised the original software, which was later expanded to about 100 tables. This system, now called CENTRA 2000, is designed for quick implementation and supports the engineering process from preliminary design through release-to-production. CENTRA 2000 can also handle audit histories and provides a means to ensure new information is distributed. The product has 30 production sites worldwide.
Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.
Hartman, Douglas J
2015-06-01
Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2015 Elsevier Inc. All rights reserved.
Enhancing and Customizing Laboratory Information Systems to Improve/Enhance Pathologist Workflow.
Hartman, Douglas J
2016-03-01
Optimizing pathologist workflow can be difficult because it is affected by many variables. Surgical pathologists must complete many tasks that culminate in a final pathology report. Several software systems can be used to enhance/improve pathologist workflow. These include voice recognition software, pre-sign-out quality assurance, image utilization, and computerized provider order entry. Recent changes in the diagnostic coding and the more prominent role of centralized electronic health records represent potential areas for increased ways to enhance/improve the workflow for surgical pathologists. Additional unforeseen changes to the pathologist workflow may accompany the introduction of whole-slide imaging technology to the routine diagnostic work. Copyright © 2016 Elsevier Inc. All rights reserved.
2011-01-01
The Quadruple Aim: Working Together, Achieving Success. 2011 Military Health System Conference presentation: TMA and Services Using Technology, Clinical Workflow Redesign, and Team Solutions to Achieve the Patient Centered Medical Home. LTC Nicole Kerkenbush, MHA, MN, Army Medical Department, Office of the Surgeon General, Chief Medical Information Officer.
Computer imaging and workflow systems in the business office.
Adams, W T; Veale, F H; Helmick, P M
1999-05-01
Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.
It's All About the Data: Workflow Systems and Weather
NASA Astrophysics Data System (ADS)
Plale, B.
2009-05-01
Digital data is fueling new advances in the computational sciences, particularly geospatial research as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data driven science, for discovery, determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real time information. If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.
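The automated metadata and provenance capture argued for above can be illustrated with a small, generic sketch: a decorator records each task's inputs, output, and timing into a lineage trail. The record fields are simplified, and this is not the LEAD or Open Provenance Model implementation.

```python
# Minimal sketch of automated provenance capture for workflow tasks.
import functools, json, time

PROVENANCE = []   # in-memory lineage trail; a real system would persist this

def capture_provenance(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        PROVENANCE.append({
            "task": func.__name__,
            "inputs": [repr(a) for a in args] + [f"{k}={v!r}" for k, v in kwargs.items()],
            "output": repr(result),
            "started": start,
            "duration_s": round(time.time() - start, 3),
        })
        return result
    return wrapper

@capture_provenance
def run_forecast(model_version, config):
    return f"forecast({model_version},{config})"   # placeholder model run

run_forecast("WRF-3.2", config="nested_domain")
print(json.dumps(PROVENANCE, indent=2))
```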
Pohjonen, Hanna; Ross, Peeter; Blickman, Johan G; Kamman, Richard
2007-01-01
Emerging technologies are transforming the workflows in healthcare enterprises. Computing grids and handheld mobile/wireless devices are providing clinicians with enterprise-wide access to all patient data and analysis tools on a pervasive basis. In this paper, emerging technologies are presented that provide computing grids and streaming-based access to image and data management functions, and system architectures that enable pervasive computing on a cost-effective basis. Finally, the implications of such technologies are investigated regarding the positive impacts on clinical workflows.
Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology
Rohde, Gustavo K.; Ozolek, John A.; Parwani, Anil V.; Pantanowitz, Liron
2014-01-01
Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications of this novel technology have evolved in the last decade, including education, research and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts. These include topics related to digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms. PMID:25250190
Carnegie Mellon University bioimaging day 2014: Challenges and opportunities in digital pathology.
Rohde, Gustavo K; Ozolek, John A; Parwani, Anil V; Pantanowitz, Liron
2014-01-01
Recent advances in digital imaging are impacting the practice of pathology. One of the key enabling technologies leading the way towards this transformation is whole slide imaging (WSI), which allows glass slides to be converted into large image files that can be shared, stored, and analyzed rapidly. Many applications of this novel technology have evolved in the last decade, including education, research and clinical applications. This publication highlights a collection of abstracts, each corresponding to a talk given at Carnegie Mellon University's (CMU) Bioimaging Day 2014, co-sponsored by the Biomedical Engineering and Lane Center for Computational Biology Departments at CMU. Topics related specifically to digital pathology are presented in this collection of abstracts. These include topics related to digital workflow implementation, imaging and artifacts, storage demands, and automated image analysis algorithms.
Single-use thermoplastic microfluidic burst valves enabling on-chip reagent storage
Rahmanian, Omid D.
2014-01-01
A simple and reliable method for fabricating single-use normally closed burst valves in thermoplastic microfluidic devices is presented, using a process flow that is readily integrated into established workflows for the fabrication of thermoplastic microfluidics. An experimental study of valve performance reveals the relationships between valve geometry and burst pressure. The technology is demonstrated in a device employing multiple valves engineered to actuate at different inlet pressures that can be generated using integrated screw pumps. On-chip storage and reconstitution of fluorescein salt sealed within defined reagent chambers are demonstrated. By taking advantage of the low gas and water permeability of cyclic olefin copolymer, the robust burst valves allow on-chip hermetic storage of reagents, making the technology well suited for the development of integrated and disposable assays for use at the point of care. PMID:25972774
NASA Astrophysics Data System (ADS)
Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The DIaaS model's backbone is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to running data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and for image storing and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing the engineering time and computational cost.
Sipes, Carolyn; Baker, Joy Don
2015-09-01
This collaborative study sought to describe technology used by AORN members at work, inclusive of radio-frequency identification or barcode scanning (RFID), data collection tools (DATA), workflow or dashboard management tools (DASHBOARD), and environmental services/room decontamination technologies (ENVIRON), and to identify the perceived effects of each technology on workflow efficiency (WFE) and quality patient care (QPC). The 462 respondents to the AORN Technology in the OR survey reported use of technology (USE) in all categories. Eleven of 17 RFID items had a strong positive correlation between the designated USE item and the perceived effect on WFE and QPC. Five of the most-used technology items were found in the DATA category. Two of the five related to Intraoperative Nursing Documentation and the use of the Perioperative Nursing Data Set. The other three related to Imaging Integration for Radiology Equipment, Video Camera Systems, and Fiber-optic Systems. All three elements explored in the DASHBOARD category (ie, Patient Update, OR Case, OR Efficiency) demonstrated approximately 50% or greater perceived effectiveness in WFE and QPC. There was a low reported use of ENVIRON technologies, resulting in limited WFE and QPC data for this category. Copyright © 2015 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Automating radiologist workflow, part 3: education and training.
Reiner, Bruce
2008-12-01
The current model for radiologist education consists largely of mentorship during residency, followed by peer-to-peer training thereafter. The traditional focus of this radiologist education has historically been restricted to anatomy, pathology, and imaging modality. This "human" mentoring model becomes a limiting factor in the current practice environment because of rapid and dramatic changes in imaging and information technologies, along with the increased time demands placed on practicing radiologists. One novel way to address these burgeoning education and training challenges is to leverage technology, with the creation of user-specific and context-specific automated workflow templates. These automated templates would provide a low-stress, time-efficient, and easy-to-use equivalent of "computerized" mentoring. A radiologist could identify the workflow template of interest on the basis of the specific computer application, pathology, anatomy, or modality of interest. While the corresponding workflow template is activated, the radiologist "student" could effectively start and stop at areas of interest and use the functionality of an electronic wizard to identify additional educational resource of interest. An additional training feature of the technology is the ability to review "proven" cases for the purposes of establishing competence and credentialing.
Highlights of X-Stack ExM Deliverable: MosaStore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ripeanu, Matei
2016-07-20
This brief report highlights the experience gained with MosaStore, an exploratory part of the X-Stack project “ExM: System support for extreme-scale, many-task applications”. The ExM project proposed to use concurrent workflows supported by the Swift language and runtime as an innovative programming model to exploit parallelism in exascale computers. MosaStore aims to support this endeavor by improving storage support for workflow-based applications, more precisely by exploring the gains that can be obtained from co-designing the storage system and the workflow runtime engine. MosaStore has been developed primarily at the University of British Columbia.
Coupling between a multi-physics workflow engine and an optimization framework
NASA Astrophysics Data System (ADS)
Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.
2016-03-01
A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to make it possible to replace a framework, a workflow, or an optimizer with another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to being robust, GAs can handle invalid data that may appear during the optimization, so they work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples; a test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization against various figures of merit and constraints.
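The abstract describes a socket-based exchange between the workflow engine (SYCOMORE) and the optimizer (URANIE) but does not publish the message format. The sketch below illustrates the general pattern under the assumption of a simple one-JSON-object-per-line protocol; the host, port, and placeholder "cost" evaluation are illustrative, not part of the actual coupling library.

```python
import json
import socket

HOST, PORT = "127.0.0.1", 5555  # hypothetical endpoint; real deployments differ

def serve_workflow_evaluations():
    """Workflow-engine side: accept one optimizer connection and evaluate candidates."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            while True:
                line = stream.readline()
                if not line:                          # optimizer closed the connection
                    break
                params = json.loads(line)             # e.g. {"major_radius": 9.0, ...}
                # Placeholder physics: a real system code would run its workflow here.
                merit = {"cost": sum(v * v for v in params.values())}
                stream.write(json.dumps(merit) + "\n")
                stream.flush()

def evaluate_population(candidates):
    """Optimizer side: stream candidate designs and collect their evaluations."""
    with socket.create_connection((HOST, PORT)) as sock, sock.makefile("rw") as stream:
        results = []
        for params in candidates:
            stream.write(json.dumps(params) + "\n")
            stream.flush()
            results.append(json.loads(stream.readline()))
        return results
```

Keeping the exchange down to plain messages over a socket is what lets either side (workflow, optimizer, or the individual workflow modules) be swapped without touching the other framework.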
How Can We Better Detect Unauthorized GMOs in Food and Feed Chains?
Fraiture, Marie-Alice; Herman, Philippe; De Loose, Marc; Debode, Frédéric; Roosens, Nancy H
2017-06-01
Current GMO detection systems have limited abilities to detect unauthorized genetically modified organisms (GMOs). Here, we propose a new workflow, based on next-generation sequencing (NGS) technology, to overcome this problem. In providing information about DNA sequences, this high-throughput workflow can distinguish authorized and unauthorized GMOs by strengthening the tools commonly used by enforcement laboratories with the help of NGS technology. In addition, thanks to its massive sequencing capacity, this workflow could be used to monitor GMOs present in the food and feed chain. In view of its potential implementation by enforcement laboratories, we discuss this innovative approach, its current limitations, and its sustainability of use over time. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vedeneev, V. V.; Kolotnikov, M. E.; Mossakovskii, P. A.; Kostyreva, L. A.; Abdukhakimov, F. A.; Makarov, P. V.; Pyhalov, A. A.; Dudaev, M. A.
2018-01-01
In this paper we present a complex numerical workflow for the analysis of blade flutter and high-amplitude resonant oscillations, of casing containment if a blade breaks off, and of the rotor's response to blade detachment and the resulting imbalance, including an assessment of whether safe flight is possible in the auto-rotation regime. All the methods used are carefully verified by numerical convergence studies and by correlation with experiments. The workflow significantly improves the efficiency of the design process for modern jet engine compressors, ensuring a significant reduction in the time and cost of compressor design while achieving the required level of strength and durability.
Eleven quick tips for architecting biomedical informatics workflows with cloud computing.
Cole, Brian S; Moore, Jason H
2018-03-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Eleven quick tips for architecting biomedical informatics workflows with cloud computing
Moore, Jason H.
2018-01-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416
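As a concrete illustration of the "paradigm of abstraction" that the tips above advocate, the sketch below shows a minimal storage interface that pipeline code can target, with a local backend for development; a cloud backend would implement the same two methods. The class and method names are illustrative assumptions, not part of the article.

```python
import shutil
from abc import ABC, abstractmethod
from pathlib import Path

class ObjectStore(ABC):
    """Minimal storage abstraction: workflow code targets this interface only."""
    @abstractmethod
    def put(self, local_path: str, key: str) -> None: ...
    @abstractmethod
    def get(self, key: str, local_path: str) -> None: ...

class LocalStore(ObjectStore):
    """Development/testing backend backed by a plain directory."""
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
    def put(self, local_path, key):
        shutil.copyfile(local_path, self.root / key)
    def get(self, key, local_path):
        shutil.copyfile(self.root / key, local_path)

# A cloud backend (for example an S3-backed store using boto3's upload_file /
# download_file) would implement the same two methods; the pipeline code that
# calls put()/get() never has to change when workflows migrate to the cloud.
```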
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure
2016-01-01
Background Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. Objective The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. Methods We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. Results We identified 5 high-level macrocognitive processes affecting medication management—sensemaking, planning, coordination, monitoring, and decision making—and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Conclusions Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation. PMID:27733331
Medication Management: The Macrocognitive Workflow of Older Adults With Heart Failure.
Mickelson, Robin S; Unertl, Kim M; Holden, Richard J
2016-10-12
Older adults with chronic disease struggle to manage complex medication regimens. Health information technology has the potential to improve medication management, but only if it is based on a thorough understanding of the complexity of medication management workflow as it occurs in natural settings. Prior research reveals that patient work related to medication management is complex, cognitive, and collaborative. Macrocognitive processes are theorized as how people individually and collaboratively think in complex, adaptive, and messy nonlaboratory settings supported by artifacts. The objective of this research was to describe and analyze the work of medication management by older adults with heart failure, using a macrocognitive workflow framework. We interviewed and observed 61 older patients along with 30 informal caregivers about self-care practices including medication management. Descriptive qualitative content analysis methods were used to develop categories, subcategories, and themes about macrocognitive processes used in medication management workflow. We identified 5 high-level macrocognitive processes affecting medication management (sensemaking, planning, coordination, monitoring, and decision making) and 15 subprocesses. Data revealed workflow as occurring in a highly collaborative, fragile system of interacting people, artifacts, time, and space. Process breakdowns were common and patients had little support for macrocognitive workflow from current tools. Macrocognitive processes affected medication management performance. Describing and analyzing this performance produced recommendations for technology supporting collaboration and sensemaking, decision making and problem detection, and planning and implementation.
Workflow and web application for annotating NCBI BioProject transcriptome data
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David
2017-01-01
Abstract The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765
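One of the secondary data products mentioned above is BLAST alignments. A hedged sketch of such a pipeline step is shown below, wrapping the BLAST+ blastn command line; the file paths, database name, and thresholds are illustrative and are not taken from the NCBI pipeline.

```python
import subprocess
from pathlib import Path

def blast_transcripts(fasta: str, db: str, out_dir: str = "blast_out") -> Path:
    """Align assembled transcripts against a reference database with blastn (BLAST+).

    All paths and parameter values here are illustrative assumptions; the
    actual pipeline settings are not published in the abstract above.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    result = out / (Path(fasta).stem + ".blastn.tsv")
    subprocess.run(
        ["blastn", "-query", fasta, "-db", db,
         "-outfmt", "6",            # tabular output, easy to load downstream
         "-evalue", "1e-10",
         "-num_threads", "4",
         "-out", str(result)],
        check=True,
    )
    return result
```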
A novel spectral library workflow to enhance protein identifications.
Li, Haomin; Zong, Nobel C; Liang, Xiangbo; Kim, Allen K; Choi, Jeong Ho; Deng, Ning; Zelaya, Ivette; Lam, Maggie; Duan, Huilong; Ping, Peipei
2013-04-09
The innovations in mass spectrometry-based investigations in proteome biology enable systematic characterization of molecular details in pathophysiological phenotypes. However, the process of delineating large-scale raw proteomic datasets into a biological context requires high-throughput data acquisition and processing. A spectral library search engine makes use of previously annotated experimental spectra as references for subsequent spectral analyses. This workflow delivers many advantages, including elevated analytical efficiency and specificity as well as reduced demands in computational capacity. In this study, we created a spectral matching engine to address challenges commonly associated with a library search workflow. Particularly, an improved sliding dot product algorithm, that is robust to systematic drifts of mass measurement in spectra, is introduced. Furthermore, a noise management protocol distinguishes spectra correlation attributed from noise and peptide fragments. It enables elevated separation between target spectral matches and false matches, thereby suppressing the possibility of propagating inaccurate peptide annotations from library spectra to query spectra. Moreover, preservation of original spectra also accommodates user contributions to further enhance the quality of the library. Collectively, this search engine supports reproducible data analyses using curated references, thereby broadening the accessibility of proteomics resources to biomedical investigators. This article is part of a Special Issue entitled: From protein structures to clinical applications. Copyright © 2013 Elsevier B.V. All rights reserved.
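The abstract names a sliding dot product that tolerates systematic m/z drift but does not give its details. The following sketch shows one generic way such a score can be computed on binned spectra; it is an illustration only, not the authors' implementation, and all bin widths and shift limits are assumptions.

```python
import numpy as np

def binned(spectrum, mz_min=100.0, mz_max=2000.0, bin_width=0.5):
    """Turn a list of (m/z, intensity) pairs into a normalized fixed-length vector."""
    edges = np.arange(mz_min, mz_max + bin_width, bin_width)
    vec = np.zeros(len(edges) - 1)
    mz, inten = np.asarray(spectrum).T
    idx = np.clip(np.digitize(mz, edges) - 1, 0, len(vec) - 1)
    np.add.at(vec, idx, inten)                # sum intensities falling in each bin
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def sliding_dot_product(query, library, max_shift_bins=3):
    """Best normalized dot product over small integer bin shifts, so a constant
    m/z calibration drift does not destroy the match score."""
    scores = [np.dot(np.roll(query, s), library)
              for s in range(-max_shift_bins, max_shift_bins + 1)]
    return max(scores)
```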
Maserat, Elham; Seied Farajollah, Seiede Sedigheh; Safdari, Reza; Ghazisaeedi, Marjan; Aghdaei, Hamid Asadzadeh; Zali, Mohammad Reza
2015-01-01
Colorectal cancer is a major cause of morbidity and mortality throughout the world. Colorectal cancer screening is an optimal way to reduce morbidity and mortality, and a clinical decision support system (CDSS) plays an important role in predicting the success of screening processes. A DSS is a computer-based information system that improves the delivery of preventive care services. The aim of this article was to detail the engineering of information requirements and the workflow design of a CDSS for a colorectal cancer screening program. In the first stage, a screening minimum data set was determined; programs in developed and developing countries were analyzed to identify this data set, and information deficiencies and gaps were then determined with a checklist. The second stage was a qualitative survey using semi-structured interviews, in which the perspectives of 15 users and stakeholders on the workflow of the CDSS were studied. Finally, the workflow of the DSS for the control program was designed from standard clinical practice guidelines and these perspectives. The screening minimum data set of the national colorectal cancer screening program was defined in five sections: colonoscopy, surgery, pathology, genetics, and pedigree data sets. Deficiencies and information gaps were analyzed, a standard screening work process was designed, and the workflow of the DSS and its data entry stage were determined. A CDSS facilitates complex decision making for screening and has a key role in designing optimal interactions between the colonoscopy, pathology, and laboratory departments. Workflow analysis is also useful for identifying data reconciliation strategies to address documentation gaps. Following the recommendations of the CDSS should improve the quality of colorectal cancer screening.
Using technology to improve and support communication and workflow processes.
Bahlman, Deborah Tuke; Johnson, Fay C
2005-07-01
In conjunction with a large expansion project, a team of perioperative staff members reviewed their workflow processes and designed their ideal patient tracking and communication system. Technologies selected and deployed included a passive infrared tracking system, an enhanced nurse call system, wireless telephones, and a web-based electronic grease board. The new system provides staff members with an easy way to obtain critical pieces of patient information, as well as track the progress of patients and locate equipment.
Evolution of Medication Administration Workflow in Implementing Electronic Health Record System
ERIC Educational Resources Information Center
Huang, Yuan-Han
2013-01-01
This study focused on the clinical workflow evolutions when implementing the health information technology (HIT). The study especially emphasized on administrating medication when the electronic health record (EHR) systems were adopted at rural healthcare facilities. Mixed-mode research methods, such as survey, observation, and focus group, were…
Guest editor's introduction
NASA Astrophysics Data System (ADS)
Chrysanthis, Panos K.
1996-12-01
This special issue focuses on current efforts to represent and support workflows that integrate information systems and human resources within a business or manufacturing enterprise. Workflows may also be viewed as an emerging computational paradigm for effective structuring of cooperative applications involving human users and access to diverse data types not necessarily maintained by traditional database management systems. A workflow is an automated organizational process (also called business process) which consists of a set of activities or tasks that need to be executed in a particular controlled order over a combination of heterogeneous database systems and legacy systems. Within workflows, tasks are performed cooperatively by either human or computational agents in accordance with their roles in the organizational hierarchy. The challenge in facilitating the implementation of workflows lies in developing efficient workflow management systems. A workflow management system (also called workflow server, workflow engine or workflow enactment system) provides the necessary interfaces for coordination and communication among human and computational agents to execute the tasks involved in a workflow and controls the execution orderings of tasks as well as the flow of data that these tasks manipulate. That is, the workflow management system is responsible for correctly and reliably supporting the specification, execution, and monitoring of workflows. The six papers selected (out of the twenty-seven submitted for this special issue of Distributed Systems Engineering) address different aspects of these three functional components of a workflow management system. In the first paper, `Correctness issues in workflow management', Kamath and Ramamritham discuss the important issue of correctness in workflow management that constitutes a prerequisite for the use of workflows in the automation of critical organizational/business processes. In particular, this paper examines the issues of execution atomicity and failure atomicity, differentiating between correctness requirements for system failures and logical failures, and surveys techniques that can be used to ensure data consistency in workflow management systems. While the first paper is concerned with correctness assuming transactional workflows in which selective transactional properties are associated with individual tasks or the entire workflow, the second paper, `Scheduling workflows by enforcing intertask dependencies' by Attie et al., assumes that the tasks can be either transactions or other activities involving legacy systems. This second paper describes the modelling and specification of conditions involving events and dependencies among tasks within a workflow using temporal logic and finite state automata. It also presents a scheduling algorithm that enforces all stated dependencies by executing at any given time only those events that are allowed by all the dependency automata and in an order as specified by the dependencies. In any system with decentralized control, there is a need to effectively cope with the tension that exists between autonomy and consistency requirements. In `A three-level atomicity model for decentralized workflow management systems', Ben-Shaul and Heineman focus on the specific requirement of enforcing failure atomicity in decentralized, autonomous and interacting workflow management systems. 
Their paper describes a model in which each workflow manager must be able to specify the sequence of tasks that comprise an atomic unit for the purposes of correctness, and the degrees of local and global atomicity for the purpose of cooperation with other workflow managers. The paper also discusses a realization of this model in which treaties and summits provide an agreement mechanism, while underlying transaction managers are responsible for maintaining failure atomicity. The fourth and fifth papers are experience papers describing a workflow management system and a large scale workflow application, respectively. Schill and Mittasch, in `Workflow management systems on top of OSF DCE and OMG CORBA', describe a decentralized workflow management system and discuss its implementation using two standardized middleware platforms, namely, OSF DCE and OMG CORBA. The system supports a new approach to workflow management, introducing several new concepts such as data type management for integrating various types of data and quality of service for various services provided by servers. A problem common to both database applications and workflows is the handling of missing and incomplete information. This is particularly pervasive in an `electronic market' with a huge number of retail outlets producing and exchanging volumes of data, the application discussed in `Information flow in the DAMA project beyond database managers: information flow managers'. Motivated by the need for a method that allows a task to proceed in a timely manner if not all data produced by other tasks are available by its deadline, Russell et al propose an architectural framework and a language that can be used to detect, approximate and, later on, to adjust missing data if necessary. The final paper, `The evolution towards flexible workflow systems' by Nutt, is complementary to the other papers and is a survey of issues and of work related to both workflow and computer supported collaborative work (CSCW) areas. In particular, the paper provides a model and a categorization of the dimensions which workflow management and CSCW systems share. Besides summarizing the recent advancements towards efficient workflow management, the papers in this special issue suggest areas open to investigation and it is our hope that they will also provide the stimulus for further research and development in the area of workflow management systems.
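As a much-simplified companion to the dependency-enforcement ideas surveyed above (the papers use temporal logic and finite-state automata), the sketch below only enforces "task B may start after all of its prerequisite tasks complete" by topological ordering; the function and the example task names are illustrative.

```python
from collections import defaultdict, deque

def schedule(tasks, depends_on):
    """Return an execution order in which every task runs only after all tasks
    it depends on have completed (a simplified stand-in for automata-based
    dependency enforcement)."""
    indegree = {t: 0 for t in tasks}
    successors = defaultdict(list)
    for task, prereqs in depends_on.items():
        for p in prereqs:
            successors[p].append(task)
            indegree[task] += 1
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in successors[t]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(tasks):
        raise ValueError("cyclic dependencies cannot be scheduled")
    return order

# schedule(["book", "bill", "ship"], {"bill": ["book"], "ship": ["book", "bill"]})
# -> ['book', 'bill', 'ship']
```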
The potential of quantum technology gravity sensors in civil engineering
NASA Astrophysics Data System (ADS)
Tuckwell, G.; Metje, N.; Boddice, D.; Usher, C.
2017-12-01
Potential field techniques have advantages over active geophysical techniques as they are not limited to the depth they can image features, provided the signals of interest are detectable amongst the other variations recorded by the instrument. A new generation of gravity instruments based on quantum technology promise greatly increased measurement sensitivity, but with this comes significant challenges in data processing and noise suppression. In the UK Innovate UK funded SIGMA project (http://www.rsksigma.co.uk/) the field of opportunity for a step change in gravity sensor accuracy has been evaluated by comparison with existing geophysical sensors, identifying the range of targets and depths of interest to commercial end users that are currently undetectable and might become visible. Forward modelling was used to quantify the potential of a Quantum Technology (QT) gravity and gravity gradiometer sensor. A substantive improvement in detectability of targets is predicted, which can be considered as a factor of 1.5 to 2 increase in the depth of detectability, or in the reduction of the size of the feature of interest. To take further advantage of new instrument sensitivity, new survey workflows are required. The accuracy of measured gravity maps is limited by environmental vibration noise, and by the accuracy with which tidal variations and terrain signals can be removed. It is still common practice in engineering scale surveys for gravity values to be reduced to Bouguer residuals. However, with a more sensitive instrument comes the need to measure the terrain more accurately. This can be achieved within a commercially viable workflow using a laser scanner for rapid data acquisition and advanced processing to produce an accurate DEM. Initial tests on 4 commercial sites have shown that an improvement of 10s of mGal can be achieved if applying a full digital terrain model correction to the microgravity data even on sites with very minor topographic height variations. At the same time, the new algorithms developed by the project can reduce the computational time by a factor of 20. This will have implications on the commercial viability of a QT gravity instrument.
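For reference, the reductions behind the "Bouguer residuals" and terrain correction mentioned above follow the standard textbook form; the constants below are the usual free-air and Bouguer-slab values (amsmath notation), not SIGMA-specific parameters.

```latex
% Standard simple-Bouguer reduction (textbook constants, not SIGMA-specific):
%   h    = station height above datum [m]
%   rho  = slab density [g cm^{-3}]
\[
  \Delta g_{\mathrm{B}}
    = g_{\mathrm{obs}} - g_{\phi}(\text{latitude})
      + \underbrace{0.3086\,h}_{\text{free-air}}
      - \underbrace{2\pi G \rho h \approx 0.04193\,\rho\,h}_{\text{Bouguer slab}}
      + \delta g_{\mathrm{T}}(\text{terrain from the DTM})
      \quad [\mathrm{mGal}]
\]
```

The project's contribution sits in the last term: a denser, laser-scanned digital terrain model and faster algorithms make the terrain correction accurate enough to match the sensitivity of a quantum-technology instrument.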
Standardization in synthetic biology: an engineering discipline coming of age.
Decoene, Thomas; De Paepe, Brecht; Maertens, Jo; Coussement, Pieter; Peters, Gert; De Maeseneire, Sofie L; De Mey, Marjan
2018-08-01
Leaping DNA read-and-write technologies, and extensive automation and miniaturization are radically transforming the field of biological experimentation by providing the tools that enable the cost-effective high-throughput required to address the enormous complexity of biological systems. However, standardization of the synthetic biology workflow has not kept abreast with dwindling technical and resource constraints, leading, for example, to the collection of multi-level and multi-omics large data sets that end up disconnected or remain under- or even unexploited. In this contribution, we critically evaluate the various efforts, and the (limited) success thereof, in order to introduce standards for defining, designing, assembling, characterizing, and sharing synthetic biology parts. The causes for this success or the lack thereof, as well as possible solutions to overcome these, are discussed. Akin to other engineering disciplines, extensive standardization will undoubtedly speed-up and reduce the cost of bioprocess development. In this respect, further implementation of synthetic biology standards will be crucial for the field in order to redeem its promise, i.e. to enable predictable forward engineering.
SMITH: a LIMS for handling next-generation sequencing workflows
2014-01-01
Background Life-science laboratories make increasing use of Next Generation Sequencing (NGS) for studying bio-macromolecules and their interactions. Array-based methods for measuring gene expression or protein-DNA interactions are being replaced by RNA-Seq and ChIP-Seq. Sequencing is generally performed by specialized facilities that have to keep track of sequencing requests, trace samples, ensure quality and make data available according to predefined privileges. An integrated tool helps to troubleshoot problems, to maintain a high quality standard, and to reduce time and costs. Commercial and non-commercial tools called LIMS (Laboratory Information Management Systems) are available for this purpose. However, they often come at prohibitive cost and/or lack the flexibility and scalability needed to adjust seamlessly to the frequently changing protocols employed. In order to manage the flow of sequencing data produced at the Genomic Unit of the Italian Institute of Technology (IIT), we developed SMITH (Sequencing Machine Information Tracking and Handling). Methods SMITH is a web application with a MySQL server at the backend. Wet-lab scientists of the Centre for Genomic Science and database experts from the Politecnico of Milan developed SMITH in the context of a Genomic Data Model Project. The database schema stores all the information of an NGS experiment, including the descriptions of all protocols and algorithms used in the process. Notably, an attribute-value table allows associating an unconstrained textual description with each sample and with all the data produced afterwards. This method permits the creation of metadata that can be used to search the database for specific files as well as for statistical analyses. Results SMITH runs automatically and limits direct human interaction mainly to administrative tasks. SMITH data-delivery procedures were standardized, making it easier for biologists and analysts to navigate the data. Automation also helps save time. The workflows are available through an API provided by the workflow management system. The parameters and input data are passed to the workflow engine that performs de-multiplexing, quality control, alignments, etc. Conclusions SMITH standardizes, automates, and speeds up sequencing workflows. Annotation of data with key-value pairs facilitates meta-analysis. PMID:25471934
SMITH: a LIMS for handling next-generation sequencing workflows.
Venco, Francesco; Vaskin, Yuriy; Ceol, Arnaud; Muller, Heiko
2014-01-01
Life-science laboratories make increasing use of Next Generation Sequencing (NGS) for studying bio-macromolecules and their interactions. Array-based methods for measuring gene expression or protein-DNA interactions are being replaced by RNA-Seq and ChIP-Seq. Sequencing is generally performed by specialized facilities that have to keep track of sequencing requests, trace samples, ensure quality and make data available according to predefined privileges. An integrated tool helps to troubleshoot problems, to maintain a high quality standard, and to reduce time and costs. Commercial and non-commercial tools called LIMS (Laboratory Information Management Systems) are available for this purpose. However, they often come at prohibitive cost and/or lack the flexibility and scalability needed to adjust seamlessly to the frequently changing protocols employed. In order to manage the flow of sequencing data produced at the Genomic Unit of the Italian Institute of Technology (IIT), we developed SMITH (Sequencing Machine Information Tracking and Handling). SMITH is a web application with a MySQL server at the backend. Wet-lab scientists of the Centre for Genomic Science and database experts from the Politecnico of Milan developed SMITH in the context of a Genomic Data Model Project. The database schema stores all the information of an NGS experiment, including the descriptions of all protocols and algorithms used in the process. Notably, an attribute-value table allows associating an unconstrained textual description with each sample and with all the data produced afterwards. This method permits the creation of metadata that can be used to search the database for specific files as well as for statistical analyses. SMITH runs automatically and limits direct human interaction mainly to administrative tasks. SMITH data-delivery procedures were standardized, making it easier for biologists and analysts to navigate the data. Automation also helps save time. The workflows are available through an API provided by the workflow management system. The parameters and input data are passed to the workflow engine that performs de-multiplexing, quality control, alignments, etc. SMITH standardizes, automates, and speeds up sequencing workflows. Annotation of data with key-value pairs facilitates meta-analysis.
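The attribute-value table described above is a standard entity-attribute-value (EAV) pattern. The sketch below illustrates the idea with SQLite; the table and column names are made up for illustration and are not SMITH's actual MySQL schema.

```python
import sqlite3

# Minimal EAV sketch: arbitrary key/value annotations attached to samples,
# searchable later for file retrieval or meta-analysis.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sample (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE sample_attribute (
        sample_id INTEGER REFERENCES sample(id),
        key       TEXT NOT NULL,
        value     TEXT NOT NULL
    );
""")
con.execute("INSERT INTO sample (id, name) VALUES (1, 'RNA-Seq_liver_rep1')")
con.executemany(
    "INSERT INTO sample_attribute VALUES (1, ?, ?)",
    [("organism", "Mus musculus"), ("protocol", "TruSeq stranded"), ("read_length", "75")],
)
hits = con.execute(
    "SELECT s.name FROM sample s JOIN sample_attribute a ON a.sample_id = s.id "
    "WHERE a.key = 'organism' AND a.value = 'Mus musculus'"
).fetchall()
print(hits)   # [('RNA-Seq_liver_rep1',)]
```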
NASA Astrophysics Data System (ADS)
Inzerillo, L.; Di Paola, F.
2017-08-01
In recent years there has been increasing use of digital techniques for conservation and restoration purposes. Among these, a dominant role is played by digital photogrammetry packages (Agisoft Photoscan, 3D Zephir), which allow textured 3D models of real objects to be obtained in a few steps. Combined with digital documentation technologies, digital fabrication technologies can be employed in a variety of ways to assist in heritage documentation, conservation, and dissemination. This paper gives practitioners an overview of the state-of-the-art technologies and a feasible workflow for optimizing point cloud and polygon mesh datasets for fabrication using 3D printing. The goal is to contribute toward automating the whole process. We sought to identify a workflow that, with minor precautions, is applicable to several types of cases. In our experimentation we used a DELTA WASP 2040 printer with PLA easyfil.
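A rough sketch of the mesh-preparation step described above is given below, using the open-source trimesh package as a stand-in (an assumption; the authors' own chain works from Photoscan/3D Zephir exports with their own tooling). The scaling target and file naming are arbitrary.

```python
import trimesh

def prepare_for_printing(mesh_path: str, target_height_mm: float = 100.0) -> str:
    """Load a photogrammetric mesh, close holes, scale it, and export an STL.

    A sketch only: parameters, paths, and the choice of trimesh are assumptions,
    not the workflow published in the paper above.
    """
    mesh = trimesh.load(mesh_path, force="mesh")   # collapse scenes to one mesh
    if not mesh.is_watertight:
        trimesh.repair.fill_holes(mesh)            # printers need closed surfaces
    # Scale so the largest bounding-box edge matches the desired print size.
    mesh.apply_scale(target_height_mm / max(mesh.extents))
    out_path = mesh_path.rsplit(".", 1)[0] + "_print.stl"
    mesh.export(out_path)
    return out_path
```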
Towards a Unified Architecture for Data-Intensive Seismology in VERCE
NASA Astrophysics Data System (ADS)
Klampanos, I.; Spinuso, A.; Trani, L.; Krause, A.; Garcia, C. R.; Atkinson, M.
2013-12-01
Modern seismology involves managing, storing and processing large datasets, typically geographically distributed across organisations. Performing computational experiments using these data generates more data, which in turn have to be managed, further analysed and frequently be made available within or outside the scientific community. As part of the EU-funded project VERCE (http://verce.eu), we research and develop a number of use-cases, interfacing technologies to satisfy the data-intensive requirements of modern seismology. Our solution seeks to support: (1) familiar programming environments to develop and execute experiments, in particular via Python/ObsPy, (2) a unified view of heterogeneous computing resources, public or private, through the adoption of workflows, (3) monitoring the experiments and validating the data products at varying granularities, via a comprehensive provenance system, (4) reproducibility of experiments and consistency in collaboration, via a shared registry of processing units and contextual metadata (computing resources, data, etc.) Here, we provide a brief account of these components and their roles in the proposed architecture. Our design integrates heterogeneous distributed systems, while allowing researchers to retain current practices and control data handling and execution via higher-level abstractions. At the core of our solution lies the workflow language Dispel. While Dispel can be used to express workflows at fine detail, it may also be used as part of meta- or job-submission workflows. User interaction can be provided through a visual editor or through custom applications on top of parameterisable workflows, which is the approach VERCE follows. According to our design, the scientist may use versions of Dispel/workflow processing elements offered by the VERCE library or override them introducing custom scientific code, using ObsPy. This approach has the advantage that, while the scientist uses a familiar tool, the resulting workflow can be executed on a number of underlying stream-processing engines, such as STORM or OGSA-DAI, transparently. While making efficient use of arbitrarily distributed resources and large data-sets is of priority, such processing requires adequate provenance tracking and monitoring. Hiding computation and orchestration details via a workflow system, allows us to embed provenance harvesting where appropriate without impeding the user's regular working patterns. Our provenance model is based on the W3C PROV standard and can provide information of varying granularity regarding execution, systems and data consumption/production. A video demonstrating a prototype provenance exploration tool can be found at http://bit.ly/15t0Fz0. Keeping experimental methodology and results open and accessible, as well as encouraging reproducibility and collaboration, is of central importance to modern science. As our users are expected to be based at different geographical locations, to have access to different computing resources and to employ customised scientific codes, the use of a shared registry of workflow components, implementations, data and computing resources is critical.
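The design described above lets users override library processing elements with their own ObsPy code. A minimal example of such a user-supplied element is sketched below; the filter band and other parameter values are arbitrary examples, not VERCE defaults.

```python
from obspy import read

def bandpass_element(stream_path=None, fmin=0.5, fmax=5.0):
    """A tiny ObsPy-based processing element of the kind a user might plug
    into a workflow; values here are arbitrary examples."""
    # read() with no argument loads ObsPy's bundled example waveform data.
    st = read(stream_path) if stream_path else read()
    st.detrend("demean")
    st.filter("bandpass", freqmin=fmin, freqmax=fmax, corners=4, zerophase=True)
    return st

processed = bandpass_element()
print(processed)
```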
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
Using Kepler for Tool Integration in Microarray Analysis Workflows.
Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C
Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments. This makes integration of these diverse bioinformatics tools difficult. Kepler provides an open source environment to integrate these disparate packages. Using Kepler, we integrated several external tools including Bioconductor packages, AltAnalyze, a python-based open source tool, and an R-based comparison tool to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves the efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.
Ergonomic design for dental offices.
Ahearn, David J; Sanders, Martha J; Turcotte, Claudia
2010-01-01
The increasing complexity of the dental office environment influences productivity and workflow for dental clinicians. Advances in technology, and with it the range of products needed to provide services, have led to sprawl in operatory setups and the potential for awkward postures for dental clinicians during the delivery of oral health services. Although ergonomics often addresses the prevention of musculoskeletal disorders for specific populations of workers, concepts of workflow and productivity are integral to improved practice in work environments. This article provides suggestions for improving workflow and productivity for dental clinicians. The article applies ergonomic principles to dental practice issues such as equipment and supply management, office design, and workflow management. Implications for improved ergonomic processes and future research are explored.
Describing and Modeling Workflow and Information Flow in Chronic Disease Care
Unertl, Kim M.; Weinger, Matthew B.; Johnson, Kevin B.; Lorenzi, Nancy M.
2009-01-01
Objectives The goal of the study was to develop an in-depth understanding of work practices, workflow, and information flow in chronic disease care, to facilitate development of context-appropriate informatics tools. Design The study was conducted over a 10-month period in three ambulatory clinics providing chronic disease care. The authors iteratively collected data using direct observation and semi-structured interviews. Measurements The authors observed all aspects of care in three different chronic disease clinics for over 150 hours, including 157 patient-provider interactions. Observation focused on interactions among people, processes, and technology. Observation data were analyzed through an open coding approach. The authors then developed models of workflow and information flow using Hierarchical Task Analysis and Soft Systems Methodology. The authors also conducted nine semi-structured interviews to confirm and refine the models. Results The study had three primary outcomes: models of workflow for each clinic, models of information flow for each clinic, and an in-depth description of work practices and the role of health information technology (HIT) in the clinics. The authors identified gaps between the existing HIT functionality and the needs of chronic disease providers. Conclusions In response to the analysis of workflow and information flow, the authors developed ten guidelines for design of HIT to support chronic disease care, including recommendations to pursue modular approaches to design that would support disease-specific needs. The study demonstrates the importance of evaluating workflow and information flow in HIT design and implementation. PMID:19717802
Data-driven discovery of new Dirac semimetal materials
NASA Astrophysics Data System (ADS)
Yan, Qimin; Chen, Ru; Neaton, Jeffrey
In recent years, a significant amount of materials property data from high-throughput computations based on density functional theory (DFT) and the application of database technologies have enabled the rise of data-driven materials discovery. In this work, we initiate the extension of the data-driven materials discovery framework to the realm of topological semimetal materials in order to accelerate the discovery of novel Dirac semimetals. We implement currently available workflows and develop new ones to data-mine the Materials Project database for novel Dirac semimetals with desirable band structures and symmetry-protected topological properties. This data-driven effort relies on the successful development of several automatic data generation and analysis tools, including a workflow for the automatic identification of topological invariants and pattern recognition techniques to find specific features in a massive number of computed band structures. Utilizing this approach, we successfully identified more than 15 novel Dirac point and Dirac nodal line systems that had not previously been theoretically predicted or experimentally identified. This work is supported by the Materials Project Predictive Modeling Center through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231.
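The band-structure pattern recognition is not specified in the abstract; as a deliberately crude illustration of the kind of screening involved, the sketch below flags k-points where two bands nearly touch close to the Fermi level, run on a synthetic two-band example. The thresholds and array layout are assumptions.

```python
import numpy as np

def candidate_dirac_points(kpoints, bands, e_fermi=0.0, gap_tol=0.02, window=0.2):
    """Flag k-points where adjacent bands nearly touch near the Fermi level.

    bands is an (n_k, n_bands) array of eigenvalues in eV; this is only a
    crude stand-in for the pattern recognition mentioned above.
    """
    hits = []
    for i, ek in enumerate(bands):
        ek = np.sort(ek)
        below = ek[ek <= e_fermi]
        above = ek[ek >= e_fermi]
        if below.size and above.size:
            gap = above[0] - below[-1]
            centre = 0.5 * (above[0] + below[-1])
            if gap < gap_tol and abs(centre - e_fermi) < window:
                hits.append((kpoints[i], gap))
    return hits

# Synthetic two-band example with a linear crossing at k = 0.
k = np.linspace(-0.5, 0.5, 101)
bands = np.column_stack([-2.0 * np.abs(k), 2.0 * np.abs(k)])
print(candidate_dirac_points(k, bands)[:1])   # crossing detected at k = 0
```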
Modeling workflow to design machine translation applications for public health practice
Turner, Anne M.; Brownstein, Megumu K.; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin
2014-01-01
Objective Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). Materials and Methods We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. Results The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. Discussion This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. Conclusion The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. PMID:25445922
Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics
Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe
2015-01-01
Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831
Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.
Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe
2015-05-01
The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). contact@workflow4metabolomics.org. © The Author 2014. Published by Oxford University Press.
A practical workflow for making anatomical atlases for biological research.
Wan, Yong; Lewis, A Kelsey; Colasanto, Mary; van Langeveld, Mark; Kardon, Gabrielle; Hansen, Charles
2012-01-01
The anatomical atlas has been at the intersection of science and art for centuries. These atlases are essential to biological research, but high-quality atlases are often scarce. Recent advances in imaging technology have made high-quality 3D atlases possible. However, until now there has been a lack of practical workflows using standard tools to generate atlases from images of biological samples. With certain adaptations, CG artists' workflow and tools, traditionally used in the film industry, are practical for building high-quality biological atlases. Researchers have developed a workflow for generating a 3D anatomical atlas using accessible artists' tools. They used this workflow to build a mouse limb atlas for studying the musculoskeletal system's development. This research aims to raise the awareness of using artists' tools in scientific research and promote interdisciplinary collaborations between artists and scientists. This video (http://youtu.be/g61C-nia9ms) demonstrates a workflow for creating an anatomical atlas.
Integrated Sensitivity Analysis Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
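As a generic illustration of what a sensitivity analysis computes (not the DART Workbench's own method), the sketch below estimates one-at-a-time sensitivities of a toy model around a nominal design point.

```python
import numpy as np

def one_at_a_time_sensitivity(model, nominal, rel_step=0.05):
    """Crude one-at-a-time sensitivities: relative change in output per
    relative change in each input, around a nominal design point."""
    nominal = np.asarray(nominal, dtype=float)
    y0 = model(nominal)
    sens = []
    for i in range(nominal.size):
        perturbed = nominal.copy()
        perturbed[i] *= (1.0 + rel_step)
        sens.append((model(perturbed) - y0) / (y0 * rel_step))
    return np.array(sens)

# Example: a toy "stress" model of two design parameters.
stress = lambda x: 3.0 * x[0] ** 2 + 0.5 * x[1]
print(one_at_a_time_sensitivity(stress, [2.0, 10.0]))   # first input dominates
```

Integrated environments automate exactly this kind of loop (sampling, model runs, and post-processing) so the engineer does not have to wire it up by hand for every study.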
ERIC Educational Resources Information Center
Li, Wenhao
2011-01-01
Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the need of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…
Wakefield, Douglas S; Ward, Marcia M; Loes, Jean L; O'Brien, John
2010-01-01
We report how seven independent critical access hospitals collaborated with a rural referral hospital to standardize workflow policies and procedures while jointly implementing the same health information technologies (HITs) to enhance medication care processes. The study hospitals implemented the same electronic health record, computerized provider order entry, pharmacy information systems, automated dispensing cabinets (ADC), and barcode medication administration systems. We conducted interviews and examined project documents to explore factors underlying the successful implementation of ADC and barcode medication administration across the network hospitals. These included a shared culture of collaboration; strategic sequencing of HIT component implementation; interface among HIT components; strategic placement of ADCs; disciplined use and sharing of workflow analyses linked with HIT applications; planning for workflow efficiencies; acquisition of adequate supply of HIT-related devices; and establishing metrics to monitor HIT use and outcomes.
Rödiger, Matthias; Ziebolz, Dirk; Schmidt, Anne-Kathrin
2015-01-01
This case report describes the fabrication of monolithic all-ceramic restorations using zirconia-reinforced lithium silicate (ZLS) ceramics. The use of powder-free intraoral scanner, generative fabrication technology of the working model, and CAD/CAM of the restorations in the dental laboratory allows a completely digitized workflow. The newly introduced ZLS ceramics offer a unique combination of fracture strength (>420 MPa), excellent optical properties, and optimum polishing characteristics, thus making them an interesting material option for monolithic restorations in the digital workflow. PMID:26509088
Machine learning for fab automated diagnostics
NASA Astrophysics Data System (ADS)
Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard
2017-06-01
Process optimization depends largely on field engineers' knowledge and expertise. However, this practice turns out to be less sustainable due to fab complexity, which is continuously increasing in order to support the extreme miniaturization of Integrated Circuits. On the one hand, process optimization and root cause analysis of tools are necessary for smooth fab operation. On the other hand, the growth in the number of wafer processing steps is adding a considerable new source of noise, which may have a significant impact at the nanometer scale. This paper explores the ability of historical process data and Machine Learning to support field engineers in production analysis and monitoring. We implement an automated workflow in order to analyze a large volume of information, and build a predictive model of overlay variation. The proposed workflow addresses significant problems that are typical in fab production, such as missing measurements, small numbers of samples, confounding effects due to heterogeneity of data, and subpopulation effects. We evaluate the proposed workflow on a real use case and we show that it is able to predict overlay excursions observed in Integrated Circuit manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that are causing defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
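A hedged sketch of the modelling idea, interpretable sparse linear regression of overlay on wafer-history features with imputation of missing measurements, is shown below using scikit-learn on synthetic placeholder data; it is not the authors' pipeline, and the features are invented for illustration.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder data standing in for per-wafer tool/context metrics.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[rng.random(X.shape) < 0.1] = np.nan          # ~10% missing measurements
coef_true = np.array([0.8, 0.0, 0.0, -0.5, 0.0, 0.0])
y = np.nan_to_num(X) @ coef_true + 0.05 * rng.normal(size=200)   # overlay proxy

model = make_pipeline(SimpleImputer(strategy="median"),
                      StandardScaler(),
                      Lasso(alpha=0.05))       # sparsity keeps the model readable
model.fit(X, y)
print(model.named_steps["lasso"].coef_)        # nonzero weights flag suspect steps
```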
Future Directions in Medical Physics: Models, Technology, and Translation to Medicine
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey
The application of physics in medicine has been integral to major advances in diagnostic and therapeutic medicine. Two primary areas represent the mainstay of medical physics research in the last century: in radiation therapy, physicists have propelled advances in conformal radiation treatment and high-precision image guidance; and in diagnostic imaging, physicists have advanced an arsenal of multi-modality imaging that includes CT, MRI, ultrasound, and PET as indispensable tools for noninvasive screening, diagnosis, and assessment of treatment response. In addition to their role in building such technologically rich fields of medicine, physicists have also become integral to daily clinical practice in these areas. The future suggests new opportunities for multi-disciplinary research bridging physics, biology, engineering, and computer science, and collaboration in medical physics carries a strong capacity for identification of significant clinical needs, access to clinical data, and translation of technologies to clinical studies. In radiation therapy, for example, the extraction of knowledge from large datasets on treatment delivery, image-based phenotypes, genomic profile, and treatment outcome will require innovation in computational modeling and connection with medical physics for the curation of large datasets. Similarly in imaging physics, the demand for new imaging technology capable of measuring physical and biological processes over orders of magnitude in scale (from molecules to whole organ systems) and exploiting new contrast mechanisms for greater sensitivity to molecular agents and subtle functional/morphological change will benefit from multi-disciplinary collaboration in physics, biology, and engineering. Also in surgery and interventional radiology, where needs for increased precision and patient safety meet constraints in cost and workflow, development of new technologies for imaging, image registration, and robotic assistance can leverage collaboration in physics, biomedical engineering, and computer science. In each area, there is major opportunity for multi-disciplinary collaboration with medical physics to accelerate the translation of such technologies to clinical use. Research supported by the National Institutes of Health, Siemens Healthcare, and Carestream Health.
NASA Astrophysics Data System (ADS)
Gali, Raja L.; Roth, Christopher G.; Smith, Elizabeth; Dave, Jaydev K.
2018-03-01
In digital radiography, computed radiography (CR) technology is based on latent image capture by storage phosphors, whereas direct radiography (DR) technology is based either on indirect conversion using a scintillator or direct conversion using a photoconductor. DR-based portable imaging systems may enhance workflow efficiency. The purpose of this work was to investigate changes in workflow efficiency at a tertiary healthcare center after transitioning from CR to DR technology for imaging with portable x-ray units. An IRB exemption was obtained. Data for all inpatient radiographs acquired with portable x-ray units from July 2014 to June 2015 (period 1) with CR technology (AMX4 or AMX4+ portable unit from GE Healthcare, NX workstation from Agfa Healthcare for digitization), from July 2015 to June 2016 (period 2) with DR technology (Carestream DRX-Revolution x-ray units and DRX-1C image receptors), and from July 2016 to January 2017 (period 3; same DR technology) were extracted using Centricity RIS-IC (GE Healthcare). The duration between the imaging-examination scheduled time and completed time (time_sch-com) was calculated and compared using non-parametric tests between the three time periods, with corrections for multiple comparisons; three time periods were used to identify any other potential temporal trends not related to transitioning from CR to DR. IBM's SPSS package was used for statistical analysis. Overall, data were obtained from 33,131, 32,194, and 18,015 cases in periods 1, 2, and 3, respectively. An Independent-Samples Kruskal-Wallis test revealed a statistically significant difference in time_sch-com across the three time periods (χ²(2, n = 83,340) = 2053, p < 0.001). The time_sch-com was highest for period 1, i.e., radiographs acquired with CR technology (median: 64 minutes), and it decreased significantly for radiographs acquired with DR technology in periods 2 (median: 49 minutes; p < 0.001) and 3 (median: 44 minutes; p < 0.001). Overall, adoption of DR technology resulted in a 27% drop in time_sch-com relative to the use of CR technology. Transitioning from CR to DR was associated with improved workflow efficiency for radiographic imaging with portable x-ray units.
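For readers who want to reproduce this style of comparison outside SPSS, the snippet below runs the same non-parametric tests with SciPy; the arrays are small placeholder samples, not the study data.

```python
from scipy.stats import kruskal, mannwhitneyu

# Placeholder time_sch-com values (minutes) for a few examinations per period.
period1 = [70, 64, 58, 91, 62]   # CR
period2 = [52, 49, 47, 60, 45]   # DR, first year
period3 = [44, 46, 41, 50, 43]   # DR, later months

h_stat, p_value = kruskal(period1, period2, period3)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up tests with a simple Bonferroni correction for three comparisons.
pairs = [("1 vs 2", period1, period2),
         ("1 vs 3", period1, period3),
         ("2 vs 3", period2, period3)]
for label, a, b in pairs:
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    print(label, "corrected p =", min(p * len(pairs), 1.0))
```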
SWS: accessing SRS sites contents through Web Services.
Romano, Paolo; Marra, Domenico
2008-03-26
Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking these technologies into account, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. Development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
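As a sketch of how a WSDL-compliant client could consume such a service from Python, the zeep library can load a WSDL and invoke operations; the WSDL URL, operation name and parameters below are placeholders and do not describe the actual SWS interface.

```python
from zeep import Client

# Placeholder WSDL location: the real SWS deployment publishes its own WSDL,
# from which the client discovers the available operations and their parameters.
client = Client("http://example.org/sws/service?wsdl")

# Hypothetical operation: query a databank hosted at a known SRS site.
result = client.service.queryDatabank(site="ebi", databank="uniprot", query="P12345")
print(result)
```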
Workflow and web application for annotating NCBI BioProject transcriptome data.
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo
2017-01-01
The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as the Transcriptome Shotgun Assembly Sequence Database (TSA) and the Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase the annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. The workflow and web application are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
Advances in Maize Transformation Technologies and Development of Transgenic Maize
Yadava, Pranjal; Abhishek, Alok; Singh, Reeva; Singh, Ishwar; Kaul, Tanushri; Pattanayak, Arunava; Agrawal, Pawan K.
2017-01-01
Maize is the principal grain crop of the world. It is also the crop where genetic engineering has been employed to a great extent to improve its various traits. The ability to transform maize is a crucial step for the application of gene technology in maize improvement. There have been constant improvements in maize transformation technologies over the past several years. The choice of genotype and explant material to initiate transformation, and the different types of media to be used in the various stages of tissue culture, can have a significant impact on the outcomes of the transformation efforts. Various methods of gene transfer, like particle bombardment, protoplast transformation, Agrobacterium-mediated transformation, in planta transformation, etc., have been tried and improved over the years. Similarly, various selection systems for retrieval of the transformants have been attempted. The commercial success of maize transformation and transgenic development is unmatched by any other crop so far. Maize transformation with newer gene editing technologies is opening up a fresh dimension in transformation protocols and workflows. This review captures the various past and recent facets in the improvement of maize transformation technologies and attempts to present a comprehensive updated picture of the current state of the art in this area. PMID:28111576
Facilitating hydrological data analysis workflows in R: the RHydro package
NASA Astrophysics Data System (ADS)
Buytaert, Wouter; Moulds, Simon; Skoien, Jon; Pebesma, Edzer; Reusser, Dominik
2015-04-01
The advent of new technologies such as web services and big data analytics holds great promise for hydrological data analysis and simulation. Driven by the need for better water management tools, it allows for the construction of much more complex workflows that integrate more, and potentially more heterogeneous, data sources with longer tool chains of algorithms and models. With the scientific challenge of designing the most adequate processing workflow comes the technical challenge of implementing the workflow with a minimal risk for errors. A wide variety of new workbench technologies and other data handling systems are being developed. At the same time, the functionality of available data processing languages such as R and Python is increasing at an accelerating pace. Because of the large diversity of scientific questions and simulation needs in hydrology, it is unlikely that one single optimal method for constructing hydrological data analysis workflows will emerge. Nevertheless, languages such as R and Python are quickly gaining popularity because they combine a wide array of functionality with high flexibility and versatility. The object-oriented nature of high-level data processing languages makes them particularly suited for the handling of complex and potentially large datasets. In this paper, we explore how handling and processing of hydrological data in R can be facilitated further by designing and implementing a set of relevant classes and methods in the experimental R package RHydro. We build upon existing efforts such as the sp and raster packages for spatial data and the spacetime package for spatiotemporal data to define classes for hydrological data (HydroST). In order to handle simulation data from hydrological models conveniently, an HM class is defined. Relevant methods are implemented to allow for an optimal integration of the HM class with existing model fitting and simulation functionality in R. Lastly, we discuss some of the design challenges of the RHydro package, including integration with big data technologies, web technologies, and emerging data models in hydrology.
Game-Based Virtual Worlds as Decentralized Virtual Activity Systems
NASA Astrophysics Data System (ADS)
Scacchi, Walt
There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).
Research on a dynamic workflow access control model
NASA Astrophysics Data System (ADS)
Liu, Yiliang; Deng, Jinxia
2007-12-01
In recent years, access control technology has been widely researched for workflow systems; two typical approaches are the RBAC (Role-Based Access Control) and TBAC (Task-Based Access Control) models, which have been used successfully, to a certain extent, for role authorization and assignment. However, as system structures become more complex, these two technologies cannot by themselves minimize privileges and separate duties, and they are inapplicable when users frequently request changes to the workflow's process. To avoid these weaknesses in application, a fine-grained access control model with a variable-flow dynamic role_task_view (DRTVBAC for short) is constructed on the basis of the existing models. In applying this model, an algorithm is constructed to satisfy users' application requirements and security needs under the fine-grained principles of least privilege and dynamic separation of duties. The DRTVBAC model was implemented in an actual system; the results show that dynamic management of the roles associated with tasks makes role assignment, authorization, and revocation more flexible; the principle of least privilege is met by activating only the permissions required for a specific task; authority is separated from the process of completing duties in the workflow; disclosure of sensitive information is prevented through a concise, dynamic view interface; and the requirement of frequently varying task flows is satisfied.
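To make the least-privilege and dynamic-separation-of-duties ideas concrete, the following generic sketch (not the DRTVBAC implementation) activates permissions only while a task is in progress and revokes them on completion.

```python
class TaskBasedAccessControl:
    """Toy model: a role grants permissions only for the duration of an assigned task."""

    def __init__(self, role_permissions):
        self.role_permissions = role_permissions   # role -> {task: set(permissions)}
        self.active = {}                           # (user, task) -> set(permissions)

    def start_task(self, user, role, task):
        # Least privilege: activate only the permissions this role needs for this task.
        self.active[(user, task)] = set(self.role_permissions.get(role, {}).get(task, set()))

    def check(self, user, task, permission):
        return permission in self.active.get((user, task), set())

    def finish_task(self, user, task):
        # Dynamic revocation: permissions disappear as soon as the task is done.
        self.active.pop((user, task), None)


ac = TaskBasedAccessControl({"clerk": {"approve_order": {"read_order", "sign_order"}}})
ac.start_task("alice", "clerk", "approve_order")
print(ac.check("alice", "approve_order", "sign_order"))   # True while the task is active
ac.finish_task("alice", "approve_order")
print(ac.check("alice", "approve_order", "sign_order"))   # False after completion
```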
NASA Astrophysics Data System (ADS)
Sodickson, Aaron D.
2017-03-01
CT technology has advanced rapidly in recent years, yet not all innovations translate readily into clinical practice. Technology advances must meet certain key requirements to make it into routine use: They must provide a well-defined clinical benefit. They must be easy to use and integrate readily into existing workflows, or better still, further streamline these workflows. These requirements heavily favor fully integrated or automated solutions that remove the human factor and provide a reproducible output independent of operator skill level. Further, to achieve these aims, collaboration with the ultimate end users is needed as early as possible in the development cycle, not just at the point of product testing. Technology innovators are encouraged to engage such collaborators even at early stages of feature or product definition. This manuscript highlights these concepts through exploration of challenging areas in CT imaging in an Emergency Department setting. Technique optimization for pulmonary embolus CT is described as an example of successful integration of multiple advances in radiation dose reduction and imaging speed. The typical workflow of a trauma "pan-scan" (incorporating scans from head through pelvis) is described to highlight workflow challenges and opportunities for improvement. Finally, Dual Energy CT is discussed to highlight the undeniable clinical value of the material characterization it provides, yet also its surprisingly slow integration into routine use beyond early adopters.
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
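As an illustration of one of the named techniques, a parallel-coordinate plot of benchmark readings can be drawn with pandas and matplotlib; the column names and values below are invented and are not aGate output.

```python
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import parallel_coordinates

# Invented benchmark readings: one row per trial, grouped by antenna configuration.
df = pd.DataFrame({
    "config": ["A", "A", "B", "B"],
    "read_rate": [0.92, 0.88, 0.75, 0.79],
    "rssi_dbm": [-48, -51, -60, -58],
    "tags_missed": [2, 3, 9, 7],
})

# Each trial becomes one polyline across the measured dimensions, colored by configuration.
parallel_coordinates(df, class_column="config")
plt.show()
```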
Towards Dynamic Authentication in the Grid — Secure and Mobile Business Workflows Using GSet
NASA Astrophysics Data System (ADS)
Mangler, Jürgen; Schikuta, Erich; Witzany, Christoph; Jorns, Oliver; Ul Haq, Irfan; Wanek, Helmut
Until now, the research community has mainly focused on the technical aspects of Grid computing and neglected commercial issues. Recently, however, the community has come to accept that the success of the Grid crucially depends on commercial exploitation. In our vision, Foster's and Kesselman's statement "The Grid is all about sharing." has to be extended by "... and making money out of it!". To realize this vision, the trustworthiness of the underlying technology needs to be ensured. This can be achieved by the use of gSET (Gridified Secure Electronic Transaction) as a basic technology for trust management and secure accounting in the presented Grid-based workflow. We present a framework, conceptually and technically, from the area of the Mobile Grid, which justifies the Grid infrastructure as a viable platform to enable commercially successful business workflows.
Internet's impact on publishing
NASA Astrophysics Data System (ADS)
Beretta, Giordano B.
1997-04-01
In 1990, the first monochrome print-on-demand (POD) systems were successfully brought to market. Subsequent color versions have been less successful, in my view mostly because they require a different workflow than traditional systems and the highly skilled specialists needed for it have not been trained. This hypothesis is based on the observation that direct-to-plate systems for short-run printing, which do not require a new workflow, are quite successful in the marketplace. The Internet and the World Wide Web are the enabling technologies that are fostering a new print model that is very likely to replace color POD before the latter can establish itself. In this model the consumers locate the material they desire from a content provider, pay through a digital cash clearinghouse, and print the material at their own cost on their local printer. All the basic technologies for this model are in place; the main challenge is to make the workflow sufficiently robust for individual use.
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
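For orientation, writing an unstructured grid to the VTU format takes only a few lines with the meshio library; the single-tetrahedron mesh below is a generic illustration, not the authors' GOCAD conversion tool.

```python
import numpy as np
import meshio

# Four points forming one tetrahedron, standing in for a converted geological volume.
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
cells = [("tetra", np.array([[0, 1, 2, 3]]))]

# Attach a material id per cell, as a groundwater model would need for layer properties.
mesh = meshio.Mesh(points, cells, cell_data={"material_id": [np.array([1])]})
meshio.write("layer_model.vtu", mesh)   # readable by ParaView and VTU-aware simulators
```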
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
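To give a flavour of the final step, the fragment below builds a minimal SBML document by hand with the libSBML Python bindings; it is only an illustration of the target format, not the model that the iBioSim conversion would generate.

```python
import libsbml

doc = libsbml.SBMLDocument(3, 1)            # SBML Level 3 Version 1
model = doc.createModel()
model.setId("toy_genetic_circuit")

comp = model.createCompartment()
comp.setId("cell")
comp.setConstant(True)
comp.setSize(1.0)

gfp = model.createSpecies()                 # a single reporter species as a placeholder
gfp.setId("GFP")
gfp.setCompartment("cell")
gfp.setInitialAmount(0.0)
gfp.setConstant(False)
gfp.setBoundaryCondition(False)
gfp.setHasOnlySubstanceUnits(False)

libsbml.writeSBMLToFile(doc, "toy_genetic_circuit.xml")
print(libsbml.writeSBMLToString(doc)[:200])  # peek at the generated XML
```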
How smart is your BEOL? productivity improvement through intelligent automation
NASA Astrophysics Data System (ADS)
Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony
2017-07-01
The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps, which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced operators are and how good the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly rather harmless slip-ups to mistakes with serious and direct economic impact including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur for each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time, the efficiency of assets is increased by avoiding unneeded cycle time and waste of resources, keeping only the process steps that are crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist, and the quantification of benefits to a mask shop with full automation by the use of a back end of line model.
A Brokering Solution for Business Process Execution
NASA Astrophysics Data System (ADS)
Santoro, M.; Bigagli, L.; Roncella, R.; Mazzetti, P.; Nativi, S.
2012-12-01
Predicting the climate change impact on biodiversity and ecosystems, advancing our knowledge of environmental phenomena interconnection, assessing the validity of simulations and other key challenges of Earth Sciences require intensive use of environmental modeling. The complexity of the Earth system requires the use of more than one model (often from different disciplines) to represent complex processes. The identification of appropriate mechanisms for reuse, chaining and composition of environmental models is considered a key enabler for an effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. The Group on Earth Observations (GEO) Model Web initiative aims to increase the present accessibility and interoperability of environmental models, allowing their flexible composition into complex Business Processes (BPs). A few basic principles are at the base of the Model Web concept (Nativi et al.): 1. Open access; 2. Minimal entry barriers; 3. Service-driven approach; 4. Scalability. In this work we propose an architectural solution aiming to contribute to the Model Web vision. This solution applies the Brokering approach for facilitating complex multidisciplinary interoperability. The Brokering approach is currently adopted in the new GEOSS Common Infrastructure (GCI), as was presented at the last GEO Plenary meeting in Istanbul, November 2011. According to the Brokering principles, the designed system is flexible enough to support the use of multiple BP design (visual) tools, heterogeneous Web interfaces for model execution (e.g. OGC WPS, WSDL, etc.), and different Workflow engines. We designed and prototyped a component called BP Broker that is able to: (i) read an abstract BP, (ii) "compile" the abstract BP into an executable one (eBP) - in this phase the BP Broker might also provide recommendations for incomplete BPs and parameter mismatch resolution - and (iii) finally execute the eBP using a Workflow engine. The present implementation makes use of the BPMN 2.0 notation for BP design and the jBPM workflow engine for eBP execution; however, the strong decoupling which characterizes the design of the BP Broker easily allows supporting other technologies. The main benefits of the proposed approach are: (i) no need for a composition infrastructure, (ii) alleviation from the technicalities of workflow definitions, (iii) support of incomplete BPs, and (iv) the reuse of existing BPs as atomic processes. The BP Broker was designed and prototyped in the EC-funded projects EuroGEOSS (http://www.eurogeoss.eu) and UncertWeb (http://www.uncertweb.org); the latter project also provided the use scenarios that were used to test the framework: the eHabitat scenario (calculation of habitat similarity likelihood) and the FERA scenario (impact of climate change on land use and crop yield). Three more scenarios are presently under development. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreements n. 248488 and n. 226487. References: Nativi, S., Mazzetti, P., & Geller, G. (2012), "Environmental model access and interoperability: The GEO Model Web initiative", Environmental Modelling & Software, 1-15.
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, the multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
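The baseline estimate whose assumption multi-stage searches can violate is simple to state; the sketch below shows the standard target-decoy calculation at a score threshold (the decoy-fusion correction proposed in the paper is more involved and is not reproduced here).

```python
def tda_fdr(psms, threshold):
    """Classic target-decoy FDR estimate at a score threshold.

    psms: iterable of (score, is_decoy) pairs, assuming target and decoy databases
    of equal size so that decoy matches estimate the number of false target matches.
    """
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0


example = [(95, False), (90, False), (88, True), (70, False), (65, True)]
print(tda_fdr(example, threshold=60))   # 2 decoys / 3 targets ≈ 0.67
```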
NASA Astrophysics Data System (ADS)
Irving, D. H.; Rasheed, M.; Hillman, C.; O'Doherty, N.
2012-12-01
Oilfield management is moving to a more operational footing, with near-real-time seismic and sensor monitoring governing drilling, fluid injection and hydrocarbon extraction workflows within safety, productivity and profitability constraints. To date, the geoscientific analytical architectures employed are configured for large data volumes, computational power or low analytical latency, and compromises in system design must be made to achieve all three aspects. These challenges are encapsulated by the phrase 'Big Data', which has been employed for over a decade in the IT industry to describe the challenges presented by data sets that are too large, volatile and diverse for existing computational architectures and paradigms. We present a data-centric architecture developed to support a geoscientific and geotechnical workflow whereby: ● scientific insight is continuously applied to fresh data; ● insights and derived information are incorporated into engineering and operational decisions; ● data governance and provenance are routine within a broader data management framework. Strategic decision support systems in large infrastructure projects such as oilfields are typically relational data environments; data modelling is pervasive across analytical functions. However, subsurface data and models are typically non-relational (i.e. file-based), in the form of large volumes of seismic imaging data or rapid streams of sensor feeds, and are analysed and interpreted using niche applications. The key architectural challenge is to move data and insight from a non-relational to a relational, or structured, data environment for faster and more integrated analytics. We describe how a blend of MapReduce and relational database technologies can be applied in geoscientific decision support, and the strengths and weaknesses of each in such an analytical ecosystem. In addition we discuss hybrid technologies that use aspects of both, and translational technologies for moving data and analytics across these platforms. Moving to a data-centric architecture requires data management methodologies to be overhauled by default, and we show how end-to-end data provenancing and dependency management is implicit in such an environment and how it benefits system administration as well as the user community. Whilst the architectural experiences are drawn from the oil industry, we believe they are more broadly applicable in academic and government settings where large volumes of data are added to incrementally and require revisiting with low analytical latency, and we suggest application to earthquake monitoring and remote sensing networks.
NASA Astrophysics Data System (ADS)
Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.
2015-12-01
Over the last 5 years we have created a community collaboratory, Vhub.org [Palma et al., J. App. Volc. 3:2 doi:10.1186/2191-5040-3-2], as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources and data, and an online platform to support collaborative efforts. As the community (current active users > 6000 from an estimated community of comparable size) embeds the tools in the collaboratory into educational and research workflows, it became imperative to: a) redesign tools into robust, open-source, reusable software for online and offline usage/enhancement; b) share large datasets with remote collaborators and other users seamlessly and securely; c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development/redevelopment has been twofold - firstly, to use best practices in software engineering and new hardware like multi-core and graphic processing units; secondly, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration, and validation. Among the software engineering practices we follow are open-source development (facilitating community contributions), modularity, and reusability. Our initial targets are four popular tools on Vhub - TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven data sets, e.g. digital elevation models of topography, satellite imagery, field observations on deposits, etc. These data are often maintained in private repositories that are privately shared by "sneaker-net". As a partial solution to this we tested mechanisms using iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted the use of workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for usage like uncertainty quantification for hazard analysis using physical models.
Building a Semantic Framework for eScience
NASA Astrophysics Data System (ADS)
Movva, S.; Ramachandran, R.; Maskey, M.; Li, X.
2009-12-01
The e-Science vision focuses on the use of advanced computing technologies to support scientists. Recent research efforts in this area have focused primarily on “enabling” use of infrastructure resources for both data and computational access, especially in the geosciences. One of the gaps in existing e-Science efforts has been the failure to incorporate stable semantic technologies within the design process itself. In this presentation, we describe our effort in designing a framework for e-Science built using a Service Oriented Architecture. Our framework provides users with capabilities to create science workflows and mine distributed data. Our e-Science framework is being designed around a mass-market tool to promote reusability across many projects. Semantics is an integral part of this framework, and our design goal is to leverage the latest stable semantic technologies. The use of these stable semantic technologies will provide the users of our framework with useful features such as: allowing search engines to find their content with RDFa tags; creating an RDF triple data store for their content; creating RDF endpoints to share with others; and semantically mashing their content with other online content available as RDF endpoints.
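As a small illustration of the RDF capabilities listed above, triples can be created and serialized with the rdflib library; the namespace and resource names are placeholders.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/escience/")   # placeholder namespace

g = Graph()
dataset = URIRef(EX["dataset/42"])
g.add((dataset, RDF.type, EX.ScienceDataset))
g.add((dataset, DCTERMS.title, Literal("Example gridded precipitation product")))
g.add((dataset, EX.usedInWorkflow, EX["workflow/storm-mining"]))

# Serialize as Turtle; the same graph could back an RDF endpoint for sharing.
print(g.serialize(format="turtle"))
```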
NASA Technical Reports Server (NTRS)
Ryan, Harry; Junell, Justin; Albasini, Colby; O'Rourke, William; Le, Thang; Strain, Ted; Stiglets, Tim
2011-01-01
A package for the automation of the Engineering Analysis (EA) process at the Stennis Space Center has been customized. It provides the ability to assign and track analysis tasks electronically, and to electronically route a task for approval. It now provides a mechanism to keep these analyses under configuration management. It also allows the analysis to be stored and linked to the engineering data that is needed to perform the analysis (drawings, etc.). PTC's (Parametric Technology Corporation) Windchill product was customized to allow the EA to be created, routed, and maintained under configuration management. Using Infoengine tasks, JSP (JavaServer Pages), and JavaScript, a user interface was created within the Windchill product that allows users to create EAs. Not only does this interface allow users to create and track EAs, but it plugs directly into the out-of-the-box ability to associate these analyses with other relevant engineering data such as drawings. Also, using the Windchill workflow tool, the Design and Data Management System (DDMS) team created an electronic routing process based on the manual/informal approval process. The team also added the ability for users to notify and track notifications to individuals about the EA. Prior to this, there was no electronic way of creating and tracking these analyses. A feature was also added that allows users to track/log e-mail notifications about the EA.
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.
2006-03-01
One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
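A conceptual sketch (not the Biomedical Imaging Resource code) of the Case/Workflow idea follows: a workflow is an ordered list of reusable, general-purpose modules, and a case binds one workflow to one patient.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Module = Callable[[Dict], Dict]   # each module reads and extends a shared context


@dataclass
class Workflow:
    name: str
    modules: List[Module] = field(default_factory=list)

    def run(self, context: Dict) -> Dict:
        for module in self.modules:          # modules execute in their defined order
            context = module(context)
        return context


@dataclass
class Case:
    patient_id: str
    workflow: Workflow

    def execute(self) -> Dict:
        return self.workflow.run({"patient_id": self.patient_id})


def load_ct(ctx):                 # placeholder general-purpose modules
    ctx["volume"] = f"CT volume for {ctx['patient_id']}"
    return ctx


def segment(ctx):
    ctx["segmentation"] = "organ mask"
    return ctx


case = Case("MRN-0001", Workflow("organ-planning", [load_ct, segment]))
print(case.execute())
```

Because the same modules can be re-ordered into new workflows, adding a customized clinical workflow mostly amounts to composing an existing module list.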
Automated lattice data generation
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.
2018-03-01
The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
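For readers unfamiliar with workflow managers, the core idea can be sketched generically (this is not Taxi's actual API): tasks declare dependencies and a scheduler runs them in a valid order, so "generate a configuration, then measure on it" happens without manual bookkeeping.

```python
from graphlib import TopologicalSorter   # Python 3.9+

def generate_config(traj):
    print(f"generate trajectory {traj}")

def measure(traj):
    print(f"measure observables on trajectory {traj}")

tasks, deps = {}, {}
for traj in range(3):
    tasks[f"gen-{traj}"] = (lambda t=traj: generate_config(t))
    tasks[f"meas-{traj}"] = (lambda t=traj: measure(t))
    deps[f"meas-{traj}"] = {f"gen-{traj}"}            # measurement needs its configuration
    if traj > 0:
        deps[f"gen-{traj}"] = {f"gen-{traj - 1}"}     # each configuration extends the previous one

# Run every task once, respecting the dependency graph.
for name in TopologicalSorter(deps).static_order():
    tasks[name]()
```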
Overcoming Barriers to Technology Adoption in Small Manufacturing Enterprises (SMEs)
2003-06-01
automates quote generation, order-processing workflow management, performance analysis, and accounting functions. Ultimately, it will enable Magdic ... that Magdic implement an MES instead. The MES, in addition to solving the problem of document management, would automate quote generation, order ... processing, workflow management, performance analysis, and accounting functions. To help Magdic personnel learn about the MES, TIDE personnel provided
Incorporating Brokers within Collaboration Environments
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; de Torcy, A.
2013-12-01
A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
NASA Astrophysics Data System (ADS)
Tomlin, M. C.; Jenkyns, R.
2015-12-01
Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.
Worklist handling in workflow-enabled radiological application systems
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens
2000-05-01
For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which automatically provide workflow participants with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part for the end-users of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely influence work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists, if present at all, are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide us with an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through the orchestration of semi-structured analysis pipelines, which involves the execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling the ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transitions through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, of the workflow model becomes more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, when more scripts or modifications were introduced as per user requirements, the debugging and validation of results became more cumbersome to achieve. To address these problems, we identified the need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for the coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of a high-level scientific workflow middleware makes the reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
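A conceptual sketch of the iterative coupling described above is given below; the executable and script names are assumptions for illustration (in the real workflow each step is submitted as an HPC job through the UNICORE middleware rather than run locally).

```python
import subprocess

def run(cmd):
    # In the actual workflow these are massively parallel batch jobs, not local calls.
    subprocess.run(cmd, check=True)

def couple(n_cycles):
    for cycle in range(n_cycles):
        run(["ElmerSolver", "ice_flow.sif"])               # continuum ice-flow step (Elmer/Ice)
        run(["python", "elmer_to_hidem.py", str(cycle)])   # format conversion (hypothetical script)
        run(["hidem", "calving.inp"])                      # discrete-element calving step (assumed binary name)
        run(["python", "hidem_to_elmer.py", str(cycle)])   # feed the calved geometry back to Elmer/Ice

couple(n_cycles=4)
```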
NASA Astrophysics Data System (ADS)
Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes
2002-05-01
If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews with the involved staff, structural analysis and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting, with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care' from the ordering of an image to the provision of the final product (image plus report). Interference of the electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.
Development of the workflow kine systems for support on KAIZEN.
Mizuno, Yuki; Ito, Toshihiko; Yoshikawa, Toru; Yomogida, Satoshi; Morio, Koji; Sakai, Kazuhiro
2012-01-01
In this paper, we introduce a new workflow line system consisting of location and image recording, which enables the acquisition of workflow information and its analysis and display. From the results of the workflow line investigation, we considered the anticipated effects and the problems for KAIZEN. Workflow line information included location information and action content information. These technologies suggest viewpoints to help improvement, for example, elimination of useless movement, redesign of the layout, and review of work procedures. In the manufacturing factory, it was clear that there was much movement away from the standard operation place and accumulated residence time. As a concrete result of this investigation, a more efficient layout was suggested by this system. In the case of the hospital, similarly, it was pointed out that the workflow has problems of layout and setup operations, based on the effective movement patterns of the experts. This system could adapt to routine as well as non-routine work. With the development of this system, which can fit and adapt to industrial diversification, more effective "visual management" (visualization of work) is expected in the future.
ESO Reflex: A Graphical Workflow Engine for Data Reduction
NASA Astrophysics Data System (ADS)
Hook, R.; Romaniello, M.; Péron, M.; Ballester, P.; Gabasch, A.; Izzo, C.; Ullgrén, M.; Maisala, S.; Oittinen, T.; Solin, O.; Savolainen, V.; Järveläinen, P.; Tyynelä, J.
2008-08-01
Sampo {http://www.eso.org/sampo} (Hook et al. 2005) is a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal is to assess the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Those prototypes will not only be used to validate concepts and understand requirements but will also be tools of immediate value for the community. Most of the raw data produced by ESO instruments can be reduced using CPL {http://www.eso.org/cpl} recipes: compiled C programs following an ESO standard and utilizing routines provided by the Common Pipeline Library. Currently reduction recipes are run in batch mode as part of the data flow system to generate the input to the ESO VLT/VLTI quality control process and are also made public for external users. Sampo has developed a prototype application called ESO Reflex {http://www.eso.org/sampo/reflex/} that integrates a graphical user interface and existing data reduction algorithms. ESO Reflex can invoke CPL-based recipes in a flexible way through a dedicated interface. ESO Reflex is based on the graphical workflow engine Taverna {http://taverna.sourceforge.net} that was originally developed by the UK eScience community, mostly for work in the life sciences. Workflows have been created so far for three VLT/VLTI instrument modes ( VIMOS/IFU {http://www.eso.org/instruments/vimos/}, FORS spectroscopy {http://www.eso.org/instruments/fors/} and AMBER {http://www.eso.org/instruments/amber/}), and the easy-to-use GUI allows the user to make changes to these or create workflows of their own. Python scripts and IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available.
Rapid Energy Modeling Workflow Demonstration Project
2014-01-01
... Conditioning Engineers; BIM: Building Information Model; BLCC: building life cycle costs; BPA: Building Performance Analysis; CAD: computer assisted ... invited to enroll in the Autodesk Building Performance Analysis (BPA) Certificate Program under a group specifically for DoD installation
Automated metadata--final project report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schissel, David
This report summarizes the work of the Automated Metadata, Provenance Cataloging, and Navigable Interfaces: Ensuring the Usefulness of Extreme-Scale Data Project (MPO Project) funded by the United States Department of Energy (DOE), Offices of Advanced Scientific Computing Research and Fusion Energy Sciences. Initially funded for three years starting in 2012, it was extended for 6 months with additional funding. The project was a collaboration between scientists at General Atomics, Lawrence Berkeley National Laboratory (LBNL), and the Massachusetts Institute of Technology (MIT). The group leveraged existing computer science technology where possible, and extended or created new capabilities where required. The MPO project was able to successfully create a suite of software tools that can be used by a scientific community to automatically document their scientific workflows. These tools were integrated into workflows for fusion energy and climate research, illustrating the general applicability of the project's toolkit. Feedback was very positive on the project's toolkit and the value of such automatic workflow documentation to the scientific endeavor.
Metabolic modelling in the development of cell factories by synthetic biology
Jouhten, Paula
2012-01-01
Cell factories are commonly microbial organisms utilized for bioconversion of renewable resources to bulk or high value chemicals. Introduction of novel production pathways in chassis strains is the core of the development of cell factories by synthetic biology. Synthetic biology aims to create novel biological functions and systems not found in nature by combining biology with engineering. The workflow of the development of novel cell factories with synthetic biology is ideally linear which will be attainable with the quantitative engineering approach, high-quality predictive models, and libraries of well-characterized parts. Different types of metabolic models, mathematical representations of metabolism and its components, enzymes and metabolites, are useful in particular phases of the synthetic biology workflow. In this minireview, the role of metabolic modelling in synthetic biology will be discussed with a review of current status of compatible methods and models for the in silico design and quantitative evaluation of a cell factory. PMID:24688669
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine runs the resulting workflow. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and the influence of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
Yeung, Daniel; Boes, Peter; Ho, Meng Wei; Li, Zuofeng
2015-05-08
Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk,(1) the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery Plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process.
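The van Herk margin recipe referenced above combines the systematic setup error Sigma (the spread of per-patient mean displacements) and the random error sigma (the typical day-to-day spread) into a margin of roughly 2.5*Sigma + 0.7*sigma. A minimal sketch of that kind of post-treatment analysis on daily correction vectors is shown below; the input data are synthetic and any clinical action thresholds are outside the scope of this illustration.

```python
"""Sketch: van Herk-style margin estimate from daily setup-correction data.

Population margin ~ 2.5 * Sigma + 0.7 * sigma, where Sigma is the SD of the
per-patient mean displacements (systematic error) and sigma is the root mean
square of the per-patient SDs (random error). Input numbers are synthetic.
"""
import numpy as np

def van_herk_margin(daily_shifts_per_patient):
    """daily_shifts_per_patient: one 1D array of displacements (mm) per patient,
    for a single axis (e.g., anterior-posterior)."""
    means = np.array([s.mean() for s in daily_shifts_per_patient])
    sds = np.array([s.std(ddof=1) for s in daily_shifts_per_patient])
    big_sigma = means.std(ddof=1)           # systematic component
    small_sigma = np.sqrt(np.mean(sds**2))  # random component
    return 2.5 * big_sigma + 0.7 * small_sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cohort = [rng.normal(loc=rng.normal(0, 1.0), scale=1.5, size=30) for _ in range(20)]
    print(f"suggested margin: {van_herk_margin(cohort):.1f} mm")
```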
A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Michelle M.; Wu, Chase Q.
2013-11-07
Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfer. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and in many cases application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. This system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.
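Services such as GlobusOnline expose programmatic interfaces that an advisor of this kind can drive on the user's behalf. The sketch below shows a plain Globus transfer submission with the globus_sdk Python package as one example of such a transport method; the access token, endpoint IDs and paths are placeholders, the exact API usage may differ between globus_sdk versions, and this is not WINDMA's own code.

```python
"""Sketch: submitting a third-party data transfer through GlobusOnline.

Uses the globus_sdk package; token, endpoint UUIDs and paths are placeholders.
An advisor system would fill these in after resource discovery.
"""
import globus_sdk

def submit_transfer(token, src_ep, dst_ep, src_path, dst_path):
    tc = globus_sdk.TransferClient(authorizer=globus_sdk.AccessTokenAuthorizer(token))
    tdata = globus_sdk.TransferData(tc, src_ep, dst_ep, label="advisor-driven transfer")
    tdata.add_item(src_path, dst_path, recursive=True)  # recursive: directory transfer
    task = tc.submit_transfer(tdata)
    return task["task_id"]                              # poll this ID to monitor status

if __name__ == "__main__":
    # Placeholder values; a real run needs a valid transfer token and endpoint UUIDs.
    print(submit_transfer("TRANSFER_TOKEN", "SRC-ENDPOINT-UUID", "DST-ENDPOINT-UUID",
                          "/data/simulation_output/", "/archive/simulation_output/"))
```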
Climate Data Analytics Workflow Management
NASA Astrophysics Data System (ADS)
Zhang, J.; Lee, S.; Pan, L.; Mattmann, C. A.; Lee, T. J.
2016-12-01
In this project we aim to pave a novel path to create a sustainable building block toward Earth science big data analytics and knowledge sharing. Closely studying how Earth scientists conduct data analytics research in their daily work, we have developed a provenance model to record their activities, and to develop a technology to automatically generate workflows for scientists from the provenance. On top of it, we have built the prototype of a data-centric provenance repository, and establish a PDSW (People, Data, Service, Workflow) knowledge network to support workflow recommendation. To ensure the scalability and performance of the expected recommendation system, we have leveraged the Apache OODT system technology. The community-approved, metrics-based performance evaluation web-service will allow a user to select a metric from the list of several community-approved metrics and to evaluate model performance using the metric as well as the reference dataset. This service will facilitate the use of reference datasets that are generated in support of the model-data intercomparison projects such as Obs4MIPs and Ana4MIPs. The data-centric repository infrastructure will allow us to catch richer provenance to further facilitate knowledge sharing and scientific collaboration in the Earth science community. This project is part of Apache incubator CMDA project.
A Mixed-Methods Research Framework for Healthcare Process Improvement.
Bastian, Nathaniel D; Munoz, David; Ventura, Marta
2016-01-01
The healthcare system in the United States is spiraling out of control due to ever-increasing costs without significant improvements in quality, access to care, satisfaction, and efficiency. Efficient workflow is paramount to improving healthcare value while maintaining the utmost standards of patient care and provider satisfaction in high stress environments. This article provides healthcare managers and quality engineers with a practical healthcare process improvement framework to assess, measure and improve clinical workflow processes. The proposed mixed-methods research framework integrates qualitative and quantitative tools to foster the improvement of processes and workflow in a systematic way. The framework consists of three distinct phases: 1) stakeholder analysis, 2a) survey design, 2b) time-motion study, and 3) process improvement. The proposed framework is applied to the pediatric intensive care unit of the Penn State Hershey Children's Hospital. The implementation of this methodology led to identification and categorization of different workflow tasks and activities into both value-added and non-value added in an effort to provide more valuable and higher quality patient care. Based upon the lessons learned from the case study, the three-phase methodology provides a better, broader, leaner, and holistic assessment of clinical workflow. The proposed framework can be implemented in various healthcare settings to support continuous improvement efforts in which complexity is a daily element that impacts workflow. We proffer a general methodology for process improvement in a healthcare setting, providing decision makers and stakeholders with a useful framework to help their organizations improve efficiency. Published by Elsevier Inc.
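A time-motion study of the kind used in phase 2b produces timestamped task observations that can be rolled up into value-added versus non-value-added time. The pandas sketch below shows one way such a roll-up could look; the task names, categories and durations are invented for illustration and are not data from the Penn State Hershey study.

```python
"""Sketch: summarizing time-motion observations into value-added vs. non-value-added time.

All task names, categories and durations below are invented for illustration.
"""
import pandas as pd

observations = pd.DataFrame({
    "task": ["patient assessment", "charting", "searching for supplies",
             "medication administration", "waiting for lab results"],
    "category": ["value-added", "value-added", "non-value-added",
                 "value-added", "non-value-added"],
    "minutes": [35, 50, 18, 25, 22],
})

summary = observations.groupby("category")["minutes"].agg(["sum", "count"])
summary["share_of_time"] = summary["sum"] / summary["sum"].sum()
print(summary)
```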
Spreter Von Kreudenstein, Thomas; Lario, Paula I; Dixit, Surjit B
2014-01-01
Computational and structure-guided methods can make significant contributions to the development of solutions for difficult protein engineering problems, including the optimization of the next generation of engineered antibodies. In this paper, we describe a contemporary industrial antibody engineering program, based on a hypothesis-driven in silico protein optimization method. The foundational concepts and methods of computational protein engineering are discussed, and an example of a computational modeling and structure-guided protein engineering workflow is provided for the design of a best-in-class heterodimeric Fc with high purity and favorable biophysical properties. We present the engineering rationale as well as structural and functional characterization data on these engineered designs. Copyright © 2013 Elsevier Inc. All rights reserved.
Integrating UAV Flight outputs in Esri's CityEngine for semi-urban areas
NASA Astrophysics Data System (ADS)
Anca, Paula; Vasile, Alexandru; Sandric, Ionut
2016-04-01
One of the most pervasive technologies of recent years, which has crossed over into consumer products due to its lowering price, is the UAV, commonly known as the drone. Besides its ever-more accessible prices and growing functionality, what is truly impressive is the drastic reduction in processing time, from days to hours: from the initial flight preparation to the final output. This paper presents such a workflow and goes further by integrating the outputs into another growing technology: 3D. The software used for this purpose is Esri's CityEngine, which was developed for modeling 3D urban environments using existing 2D GIS data and computer generated architecture (CGA) rules, instead of modeling each feature individually. A semi-urban area was selected for this study and captured using the E-Bee from Parrot. The output point cloud elevation from the E-Bee flight was transformed into a raster in order to be used as an elevation surface in CityEngine, and the mosaic raster dataset was draped over this surface. In order to model the buildings in this area, CGA rules were written using the building footprints as inputs, in the form of feature classes. The extrusion heights for the buildings were also extracted from the point cloud, and realistic textures were draped over the 3D building models. Finally, the scene was shared as a 3D web-scene which can be accessed by anyone through a link, without any software besides an internet browser. This can serve as input for Smart City development through further analysis for urban ecology. Keywords: 3D, drone, CityEngine, E-Bee, Esri, scene, web-scene
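The point-cloud-to-raster step described above can be reduced to binning point elevations onto a regular grid. The NumPy sketch below illustrates that idea with a maximum-elevation-per-cell rule; the cell size, extent and aggregation choice are arbitrary assumptions, and production workflows would normally use dedicated GIS or photogrammetry tools.

```python
"""Sketch: gridding UAV point-cloud elevations into a simple raster surface.

Takes x, y, z arrays and keeps the highest elevation per cell; the cell size
and aggregation rule are illustrative choices, not those used in the paper.
"""
import numpy as np

def points_to_raster(x, y, z, cell_size=1.0):
    cols = np.floor((x - x.min()) / cell_size).astype(int)
    rows = np.floor((y - y.min()) / cell_size).astype(int)
    raster = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, elev in zip(rows, cols, z):
        if np.isnan(raster[r, c]) or elev > raster[r, c]:
            raster[r, c] = elev   # keep the highest return per cell
    return raster

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000)
    z = 100 + 0.1 * x + rng.normal(0, 0.3, 5000)   # gently sloping synthetic terrain
    grid = points_to_raster(x, y, z, cell_size=2.0)
    print(grid.shape, round(np.nanmin(grid), 1), round(np.nanmax(grid), 1))
```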
Stahl, James E; McGowan, Heather; DiResta, Ellen; Gaydos, Charlotte A.; Klapperich, Catherine; Parrish, John; Korte, Brenda
2015-01-01
The first part of this manuscript is an introduction to systems engineering and how it may be applied to health care and point of care testing (POCT). Systems engineering is an interdisciplinary field that seeks to better understand and manage changes in complex systems and projects as a whole. Systems are sets of interconnected elements which interact with each other, are dynamic, change over time and are subject to complex behaviors. The second part of this paper reports on the results of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) workshop exploring the future of point of care testing and technologies and the recognition that these new technologies do not exist in isolation. Rather, they exist within ecosystems of other technologies and systems, and these systems influence their likelihood of success or failure and their effectiveness. In this workshop, a diverse group of individuals from around the country, from disciplines ranging from clinical care, engineering, regulatory affairs and many others to members of the three major National Institutes of Health (NIH)-funded efforts in this area (the Centers for POCT for sexually transmitted disease, POCT for the future of Cancer Care, and the POCT primary care research network), gathered together for a modified deep dive workshop exploring the current state of the art, mapping probable future directions and developing longer term goals. The invitees were broken up into 4 thematic groups: Home, Outpatient, Public/shared space and Rural/global. Each group proceeded to explore the problem and solution space for point of care tests and technology within their theme. While each thematic area had specific challenges, many commonalities also emerged. This effort thus helped create a conceptual framework for POCT as well as identifying many of the challenges for POCT going forward. Four main dimensions were identified as defining the functional space for both point of care testing and treatment; these are: Time, Location, Interpretation and Tempo. A framework is presented in this paper. There were several current and future challenges identified through the workshop. These broadly fall into the categories of technology development and implementation. More specifically these are in the areas of: 1) Design, 2) Patient driven demand and technology, 3) Information Characteristics and Presentation, 4) Health Information Systems, 5) Connectivity, 6) Workflow and implementation, 7) Maintenance/Cost, and 8) Quality Control. Definitions of these challenge areas and recommendations to address them are provided. PMID:25750593
The View from a Few Hundred Feet : A New Transparent and Integrated Workflow for UAV-collected Data
NASA Astrophysics Data System (ADS)
Peterson, F. S.; Barbieri, L.; Wyngaard, J.
2015-12-01
Unmanned Aerial Vehicles (UAVs) allow scientists and civilians to monitor earth and atmospheric conditions in remote locations. To keep up with the rapid evolution of UAV technology, data workflows must also be flexible, integrated, and introspective. Here, we present our data workflow for a project to assess the feasibility of detecting threshold levels of methane, carbon dioxide, and other aerosols by mounting consumer-grade gas analysis sensors on UAVs. In particular, we highlight our use of Project Jupyter, a set of open-source software tools and documentation designed for developing "collaborative narratives" around scientific workflows. By embracing the GitHub-backed, multi-language systems available in Project Jupyter, we enable interaction and exploratory computation while simultaneously embracing distributed version control. Additionally, the transparency of this method builds trust with civilians and decision-makers and leverages collaboration and communication to resolve problems. The goal of this presentation is to provide a generic data workflow for scientific inquiries involving UAVs and to invite the participation of the AGU community in its improvement and curation.
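For the feasibility question above, the core computation is flagging when a gas reading crosses a threshold. The pandas sketch below shows that kind of check in a notebook-friendly form; the column names, units and threshold values are assumptions for illustration, not the project's calibration.

```python
"""Sketch: flagging threshold exceedances in UAV gas-sensor telemetry.

Column names, units and thresholds are illustrative assumptions; the data are synthetic.
"""
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
telemetry = pd.DataFrame({
    "time_s": np.arange(0, 300, 5),
    "co2_ppm": 400 + rng.normal(0, 20, 60),
    "ch4_ppm": 1.9 + rng.normal(0, 0.3, 60),
})

thresholds = {"co2_ppm": 450.0, "ch4_ppm": 2.5}
for column, limit in thresholds.items():
    exceed = telemetry[telemetry[column] > limit]
    print(f"{column}: {len(exceed)} samples above {limit}")
```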
Improving medical imaging report turnaround times: the role of technology.
Marquez, Luis O; Stewart, Howard
2005-01-01
At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. The improved workflow was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in a real-time fashion and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition to maximize the resources to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional electronic-order-entry to paper request manner. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials for a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department.
Survey of UK dentists regarding the use of CAD/CAM technology.
Tran, D; Nesbit, M; Petridis, H
2016-11-18
Statement of the problem: Digital workflows (CAD/CAM) have been introduced in dentistry during recent years. No published information exists on dentists' use and reporting of this technology. Purpose: The purpose of this survey was to identify the infiltration of CAD/CAM technology in UK dental practices and to investigate the relationship of various demographic factors to the answers regarding use or non-use of this technology. Materials and methods: One thousand and thirty-one online surveys were sent to a sample of UK dentists comprising both users and non-users of CAD/CAM. It aimed to reveal information regarding type of usage, materials, perceived benefits, barriers to access, and disadvantages of CAD/CAM dentistry. Statistical analysis was undertaken to test the influence of various demographic variables such as country of work, dentist experience, level of training and type of work (NHS or private). Results: The number of completed responses totalled 385. Most of the respondents did not use any part of a digital workflow, and the main barriers to CAD/CAM use were initial costs and a lack of perceived benefit over conventional methods. Dentists delivering mostly private work were most likely to have adopted CAD/CAM technology (P < 0.001). Further training also correlated with a greater likelihood of CAD/CAM usage (P < 0.001). Most users felt that the technology had led to a change in the use of dental materials, leading to increased use of, for example, zirconia and lithium disilicate. Most users were trained either by companies or self-trained, and a third felt that their training was insufficient. The majority of respondents (89%) felt that CAD/CAM had a big role to play in the future. Conclusion: Most of the respondents did not use any part of a digital workflow. However, the majority of surveyed dentists were interested in incorporating CAD/CAM into their workflow, while most believed that it will have a big role in the future. There are still some concerns from dentists about the quality of chairside CAD/CAM restorations, while the costs remain in the main hugely prohibitive (especially for NHS dentistry).
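The association tests reported above (for example, adoption versus mostly-private practice) are the kind of analysis a chi-squared test of independence on a contingency table provides. The sketch below runs such a test with SciPy on an invented table; the counts are purely illustrative and are not the survey's data.

```python
"""Sketch: chi-squared test of independence for CAD/CAM adoption vs. practice type.

The contingency table below is invented for illustration only.
"""
import numpy as np
from scipy.stats import chi2_contingency

#                 users  non-users
table = np.array([[30,    20],    # mostly private work (hypothetical counts)
                  [25,   125]])   # mostly NHS work (hypothetical counts)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.2g}")
```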
[Application and prospect of digital technology in the field of orthodontics].
Zhou, Y H
2016-06-01
Three-dimensional (3D) digital technology has brought a revolutionary change to diagnostic planning and treatment strategy in orthodontics. Acquisition of 3D image data of the patients' hard and soft tissues, diagnostic analysis and treatment prediction, and ultimately the individualized orthodontic appliance will become the development trend and workflow of 3D orthodontics. With the development of 3D digital technology, the traditional plaster model has gradually been replaced by 3D digital models. Meanwhile, 3D facial soft tissue scans and cone-beam CT scans have gradually been applied to clinical orthodontics, making it possible to obtain 3D virtual anatomical structures of patients. With the help of digital technology, the diagnostic process is much easier for the orthodontist. However, mastering the whole digital workflow and putting it into practice in daily work is still a long way off. The purpose of this article is to enlighten orthodontists interested in digital technology and to discuss the future of digital orthodontics in China.
Web-Accessible Scientific Workflow System for Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roelof Versteeg; Roelof Versteeg; Trevor Rowe
2006-03-01
We describe the design and implementation of a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server side data management and information visualization through flexible browser based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third party applications which are invoked by the back-end using webservices. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
How to Take HRMS Process Management to the Next Level with Workflow Business Event System
NASA Technical Reports Server (NTRS)
Rajeshuni, Sarala; Yagubian, Aram; Kunamaneni, Krishna
2006-01-01
Oracle Workflow with the Business Event System offers a complete process management solution for enterprises to manage business processes cost-effectively. Using Workflow event messaging, event subscriptions, AQ Servlet and advanced queuing technologies, this presentation will demonstrate the step-by-step design and implementation of system solutions in order to integrate two dissimilar systems and establish communication remotely. As a case study, the presentation walks you through the process of propagating organization name changes in other applications that originated from the HRMS module without changing applications code. The solution can be applied to your particular business cases for streamlining or modifying business processes across Oracle and non-Oracle applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapur, A.
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: planning techniques ((a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS); planning workflow ((a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process); influence of upstream and downstream operations ((a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning); reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment; and optimizing planning processes ((a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality). Learning Objectives: gain familiarity with the workflow of the modern treatment planning process; understand the scope and challenges of managing modern treatment planning processes; gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
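A concrete way to see the compression features mentioned here (quality layers, resolution scalability, the .jp2 file format) is Pillow's JPEG 2000 plugin, which wraps OpenJPEG. The sketch below writes a layered, multi-resolution .jp2 file; it is a generic illustration of the standard's capabilities rather than code from the paper, and it assumes a Pillow build with JPEG 2000 support. The specific layer rates and resolution count are arbitrary example values.

```python
"""Sketch: writing a layered, multi-resolution JPEG 2000 file with Pillow.

Requires a Pillow build with OpenJPEG (JPEG 2000) support; the quality layers
and resolution count are arbitrary example values.
"""
from PIL import Image

img = Image.radial_gradient("L").resize((512, 512))   # synthetic test image

img.save(
    "example.jp2",
    quality_mode="rates",          # compression ratios per quality layer
    quality_layers=[40, 10, 2.5],  # coarse-to-fine layers for progressive decoding
    num_resolutions=6,             # wavelet decomposition levels (resolution scalability)
    irreversible=True,             # 9/7 wavelet (lossy); False selects the lossless 5/3
)

roundtrip = Image.open("example.jp2")
print(roundtrip.format, roundtrip.size)
```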
Structured recording of intraoperative surgical workflows
NASA Astrophysics Data System (ADS)
Neumuth, T.; Durstewitz, N.; Fischer, M.; Strauss, G.; Dietz, A.; Meixensberger, J.; Jannin, P.; Cleary, K.; Lemke, H. U.; Burgert, O.
2006-03-01
Surgical Workflows are used for the methodical and scientific analysis of surgical interventions. The approach described here is a step towards developing surgical assist systems based on Surgical Workflows and integrated control systems for the operating room of the future. This paper describes concepts and technologies for the acquisition of Surgical Workflows by monitoring surgical interventions and their presentation. Establishing systems which support the Surgical Workflow in operating rooms requires a multi-staged development process beginning with the description of these workflows. A formalized description of surgical interventions is needed to create a Surgical Workflow. This description can be used to analyze and evaluate surgical interventions in detail. We discuss the subdivision of surgical interventions into work steps regarding different levels of granularity and propose a recording scheme for the acquisition of manual surgical work steps from running interventions. To support the recording process during the intervention, we introduce a new software architecture. Core of the architecture is our Surgical Workflow editor that is intended to deal with the manifold, complex and concurrent relations during an intervention. Furthermore, a method for an automatic generation of graphs is shown which is able to display the recorded surgical work steps of the interventions. Finally we conclude with considerations about extensions of our recording scheme to close the gap to S-PACS systems. The approach was used to record 83 surgical interventions from 6 intervention types from 3 different surgical disciplines: ENT surgery, neurosurgery and interventional radiology. The interventions were recorded at the University Hospital Leipzig, Germany and at the Georgetown University Hospital, Washington, D.C., USA.
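One way to picture a recording scheme for manual surgical work steps is as a small structured record per step, with fields for the actor, the action, the instrument, the anatomical target and the granularity level. The dataclass sketch below is an illustrative reading of such a scheme; the field names and granularity levels are assumptions, not the authors' actual schema.

```python
"""Sketch: a structured record for manually acquired surgical work steps.

Field names and granularity levels are illustrative, not the paper's schema.
"""
from dataclasses import dataclass, field, asdict
from datetime import datetime
from typing import Optional

@dataclass
class WorkStep:
    intervention_id: str
    actor: str                    # e.g., "surgeon", "assistant"
    action: str                   # e.g., "dissect", "suture"
    instrument: str
    anatomical_target: str
    granularity: int              # 1 = phase, 2 = step, 3 = activity (assumed levels)
    start: datetime
    stop: Optional[datetime] = None
    notes: str = field(default="")

step = WorkStep("ENT-2005-017", "surgeon", "dissect", "forceps",
                "ethmoid sinus", granularity=3, start=datetime(2005, 6, 1, 9, 14))
print(asdict(step))
```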
Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin
2017-11-10
The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis as well as data standardization and data publication. All particular methods of the workflow which address these tasks are state-of-the-art or cutting edge. As has been shown in previous publications, each of these methods is adequate to solve its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these particular components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels to get access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: bioinfraprot@rub.de) users can easily benefit from this service and get support by experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ammann, C. M.; Vigh, J. L.; Lee, J. A.
2016-12-01
Society's growing needs for robust and relevant climate information have fostered an explosion in tools and frameworks for processing climate projections. Many top-down workflows might be employed to generate sets of pre-computed data and plots, frequently served in a "loading-dock style" through a metadata-enabled search and discovery engine. Despite these increasing resources, the diverse needs of applications-driven projects often result in data processing workflow requirements that cannot be fully satisfied using past approaches. In parallel to the data processing challenges, the provision of climate information to users in a form that is also usable represents a formidable challenge of its own. Finally, many users have neither the time nor the desire to synthesize and distill massive volumes of climate information to find the relevant information for their particular application. All of these considerations call for new approaches to developing actionable climate information. CRMe seeks to bridge the gap between the diverse and rich bottom-up needs of practitioners and the discrete, structured top-down workflows typically implemented for rapid delivery. Additionally, CRMe has implemented web-based data services capable of providing focused climate information in usable form for a given location, or as spatially aggregated information for entire regions or countries following the needs of users and sectors. Making climate data actionable also involves summarizing and presenting it in concise and approachable ways. CRMe is developing the concept of dashboards, co-developed with the users, to condense the key information into a quick summary of the most relevant, curated climate data for a given discipline, application, or location, while still enabling users to efficiently conduct deeper discovery into rich datasets on an as-needed basis.
Architecture for the Interdisciplinary Earth Data Alliance
NASA Astrophysics Data System (ADS)
Richard, S. M.
2016-12-01
The Interdisciplinary Earth Data Alliance (IEDA) is leading an EarthCube (EC) Integrative Activity to develop a governance structure and technology framework that enables partner data systems to share technology, infrastructure, and practice for documenting, curating, and accessing heterogeneous geoscience data. The IEDA data facility provides capabilities in an extensible framework that enables domain-specific requirements for each partner system in the Alliance to be integrated into standardized cross-domain workflows. The shared technology infrastructure includes a data submission hub, a domain-agnostic file-based repository, an integrated Alliance catalog and a Data Browser for data discovery across all partner holdings, as well as services for registering identifiers for datasets (DOI) and samples (IGSN). The submission hub will be a platform that facilitates acquisition of cross-domain resource documentation and channels users into domain- and resource-specific workflows tailored for each partner community. We are exploring an event-based message bus architecture with a standardized plug-in interface for adding capabilities. This architecture builds on the EC CINERGI metadata pipeline as well as the message-based architecture of the SEAD project. Plug-in components will perform file introspection to match entities to a data type registry (extending EC Digital Crust and Research Data Alliance work), extract standardized keywords (using CINERGI components), and extract location, cruise, personnel and other metadata linkage information (building on GeoLink and existing IEDA partner components). The submission hub will feed submissions to appropriate partner repositories and service endpoints targeted by domain and resource type for distribution. The Alliance governance will adopt patterns (vocabularies, operations, resource types) for self-describing data services using standard HTTP protocol for simplified data access (building on EC GeoWS and other 'RESTful' approaches). Exposure of resource descriptions (datasets and service distributions) for harvesting by commercial search engines as well as geoscience-data focused crawlers (like the EC B-Cube crawler) will increase discoverability of IEDA resources with minimal effort by curators.
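The event-based message bus with a standardized plug-in interface described above can be sketched in a few lines: plug-ins subscribe to named events on a bus and are invoked when a submission event fires. The Python sketch below is a conceptual illustration only; the class names, event names and payload fields are invented and do not reflect IEDA's implementation.

```python
"""Conceptual sketch: an event bus with a standardized plug-in interface.

Class names, event names and payload fields are invented for illustration.
"""
from abc import ABC, abstractmethod
from collections import defaultdict

class Plugin(ABC):
    """Standardized plug-in interface: declare handled events, then process them."""
    events = ()

    @abstractmethod
    def handle(self, event: str, payload: dict) -> None: ...

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def register(self, plugin: Plugin) -> None:
        for event in plugin.events:
            self._subscribers[event].append(plugin)

    def publish(self, event: str, payload: dict) -> None:
        for plugin in self._subscribers[event]:
            plugin.handle(event, payload)

class KeywordExtractor(Plugin):
    events = ("dataset.submitted",)
    def handle(self, event, payload):
        print(f"extracting keywords from {payload['file']}")

class IdentifierRegistrar(Plugin):
    events = ("dataset.submitted",)
    def handle(self, event, payload):
        print(f"registering DOI for {payload['title']}")

bus = MessageBus()
bus.register(KeywordExtractor())
bus.register(IdentifierRegistrar())
bus.publish("dataset.submitted", {"file": "cruise_ctd.csv", "title": "Example CTD profiles"})
```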
9th Annual Systems Engineering Conference: Volume 4 Thursday
2006-10-26
Presentation fragment listing capabilities: connectivity, speed, and volume; enterprise application integration; workflow integration or multi-media; federated search capability; link analysis and... categorization, federated search and automated discovery of information; collaborative tools to quickly share relevant information; built on commercial...
Vishwanath, Arun; Singh, Sandeep Rajan; Winkelstein, Peter
2010-11-01
The promise of the electronic medical record (EMR) lies in its ability to reduce the costs of health care delivery and improve the overall quality of care--a promise that is realized through major changes in workflows within the health care organization. Yet little systematic information exists about the workflow effects of EMRs. Moreover, some of the research to-date points to reduced satisfaction among physicians after implementation of the EMR and increased time, i.e., negative workflow effects. A better understanding of the impact of the EMR on workflows is, hence, vital to understanding what the technology really does offer that is new and unique. (i) To empirically develop a physician centric conceptual model of the workflow effects of EMRs; (ii) To use the model to understand the antecedents to the physicians' workflow expectation from the new EMR; (iii) To track physicians' satisfaction overtime, 3 months and 20 months after implementation of the EMR; (iv) To explore the impact of technology learning curves on physicians' reported satisfaction levels. The current research uses the mixed-method technique of concept mapping to empirically develop the conceptual model of an EMR's workflow effects. The model is then used within a controlled study to track physician expectations from a new EMR system as well as their assessments of the EMR's performance 3 months and 20 months after implementation. The research tracks the actual implementation of a new EMR within the outpatient clinics of a large northeastern research hospital. The pre-implementation survey netted 20 physician responses; post-implementation Time 1 survey netted 22 responses, and Time 2 survey netted 26 physician responses. The implementation of the actual EMR served as the intervention. Since the study was conducted within the same setting and tracked a homogenous group of respondents, the overall study design ensured against extraneous influences on the results. Outcome measures were derived empirically from the conceptual model. They included 85 items that measured physician perceptions of the EMR's workflow effect on the following eight issues: (1) administration, (2) efficiency in patient processing, (3) basic clinical processes, (4) documentation of patient encounter, (5) economic challenges and reimbursement, (6) technical issues, (7) patient safety and care, and (8) communication and confidentiality. The items were used to track expectations prior to implementation and they served as retrospective measures of satisfaction with the EMR in post-implementation Time 1 and Time 2. The findings suggest that physicians conceptualize EMRs as an incremental extension of older computerized provider order entries (CPOEs) rather than as a new innovation. The EMRs major functional advantages are seen to be very similar to, if not the same as, those of CPOEs. Technology learning curves play a statistically significant though minor role in shaping physician perceptions. The physicians' expectations from the EMR are based on their prior beliefs rather than on a rational evaluation of the EMR's fit, functionality, or performance. Their decision regarding the usefulness of the EMR is made very early, within the first few months of use of the EMR. These early perceptions then remain stable and become the lens through which subsequent experience with the EMR is interpreted. 
The findings suggest a need for communication-based interventions aimed at explaining the value, fit, and usefulness of EMRs to physicians early in the pre- and immediate post-EMR implementation stages. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Hayes, Gillian R; Lee, Charlotte P; Dourish, Paul
2011-08-01
The purpose of this paper is to demonstrate how current visual representations of organizational and technological processes do not fully account for the variability present in everyday practices. We further demonstrate how narrative networks can augment these representations to indicate potential areas for successful or problematic adoption of new technologies and potential needs for additional training. We conducted a qualitative study of the processes and routines at a major academic medical center slated to be supported by the development and installation of a new comprehensive HIT system. We used qualitative data collection techniques including observations of the activities to be supported by the new system and interviews with department heads, researchers, and both clinical and non-clinical staff. We conducted a narrative network analysis of these data by choosing exemplar processes to be modeled, selecting and analyzing narrative fragments, and developing visual representations of the interconnection of these narratives. Narrative networks enable us to view the variety of ways work has been and can be performed in practice, influencing our ability to design for innovation in use. Narrative networks are a means for analyzing and visualizing organizational routines in concert with more traditional requirements engineering, workflow modeling, and quality improvement outcome measurement. This type of analysis can support a deeper and more nuanced understanding of how and why certain routines continue to exist, change, or stop entirely. At the same time, it can illuminate areas in which adoption may be slow, more training or communication may be needed, and routines preferred by the leadership are subverted by routines preferred by the staff. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Technology platform development for targeted plasma metabolites in human heart failure.
Chan, Cy X'avia; Khan, Anjum A; Choi, Jh Howard; Ng, Cm Dominic; Cadeiras, Martin; Deng, Mario; Ping, Peipei
2013-01-01
Heart failure is a multifactorial disease associated with staggeringly high morbidity and mortality. Recently, alterations of multiple metabolites have been implicated in heart failure; however, the lack of an effective technology platform to assess these metabolites has limited our understanding of how they contribute to this disease phenotype. We have successfully developed a new workflow combining specific sample preparation with tandem mass spectrometry that enables us to extract most of the targeted metabolites. Nineteen metabolites were chosen according to their biological relevance to heart failure, including extracellular matrix remodeling, inflammation, insulin resistance, renal dysfunction, and cardioprotection against ischemic injury. In this report, we systematically engineered, optimized and refined a protocol applicable to human plasma samples; this study contributes to the methodology development with respect to deproteinization, incubation, reconstitution, and detection with mass spectrometry. The deproteinization step was optimized with 20% methanol/ethanol at a plasma:solvent ratio of 1:3. Subsequently, an incubation step was implemented which remarkably enhanced the metabolite signals and the number of metabolite peaks detected by mass spectrometry in both positive and negative modes. With respect to the step of reconstitution, 0.1% formic acid was designated as the reconstitution solvent vs. 6.5 mM ammonium bicarbonate, based on the comparable number of metabolite peaks detected in both solvents, and yet the signal detected in the former was higher. By adapting this finalized protocol, we were able to retrieve 13 out of 19 targeted metabolites from human plasma. We have successfully devised a simple albeit effective workflow for the targeted plasma metabolites relevant to human heart failure. This will be employed in tandem with a high-throughput liquid chromatography mass spectrometry platform to validate and characterize these potential metabolic biomarkers for diagnostic and therapeutic development for heart failure patients.
Some Aspects in Photogrammetry Education at the Department of Geodesy and Cadastre of the VGTU
NASA Astrophysics Data System (ADS)
Ruzgienė, Birutė
2008-03-01
Education in photogrammetry is very important when applying photogrammetric methods for terrain mapping, spatial data modelling, solving engineering tasks, measuring architectural monuments, etc. Over time, traditional photogrammetric technologies have given way to a modern, fully digital photogrammetric workflow. The number of potential users of photogrammetric methods tends to increase because of the high degree of automation in photograph (image) processing. The main subjects in the photogrammetry (particularly digital photogrammetry) educational process are discussed. Different methods and digital systems are demonstrated with examples of aerial photogrammetry products. The main objective is to explore the possibilities for training in photogrammetric measurements. Special attention is paid to stereo plotting from aerial photography, applying analytical technology modified for teaching. The integration of the functionality of digital photogrammetric systems and digital image processing is analysed as well, with the intention of extending the application areas and the possibilities for using modern technologies in urban mapping and the land cadastre. The practical presentation of photo geometry restitution is implemented as a significant part of the studies. Interactive teaching of the main photogrammetric procedures and control systems is highly desirable and would without doubt improve the quality of the educational process.
Biometric identification: a holistic perspective
NASA Astrophysics Data System (ADS)
Nadel, Lawrence D.
2007-04-01
Significant advances continue to be made in biometric technology. However, the global war on terrorism and our increasingly electronic society have created the societal need for large-scale, interoperable biometric capabilities that challenge the capabilities of current off-the-shelf technology. At the same time, there are concerns that large-scale implementation of biometrics will infringe on our civil liberties and offer increased opportunities for identity theft. This paper looks beyond the basic science and engineering of biometric sensors and fundamental matching algorithms and offers approaches for achieving greater performance and acceptability of applications enabled with currently available biometric technologies. The discussion focuses on three primary biometric system aspects: performance and scalability, interoperability, and cost benefit. Significant improvements in system performance and scalability can be achieved through careful consideration of the following elements: biometric data quality, human factors, operational environment, workflow, multibiometric fusion, and integrated performance modeling. Application interoperability hinges upon some of the factors noted above as well as adherence to interface, data, and performance standards. However, there are times when conforming to such standards comes at the price of decreased local system performance. The development of biometric performance-based cost benefit models can help determine realistic requirements and acceptable designs.
ERIC Educational Resources Information Center
Davis, Theresa M.
2013-01-01
Background: There is little evidence that technology acceptance is well understood in healthcare. The hospital environment is complex and dynamic creating a challenge when new technology is introduced because it impacts current processes and workflows which can significantly affect patient care delivery and outcomes. This study tested the effect…
DNAseq Workflow in a Diagnostic Context and an Example of a User Friendly Implementation.
Wolf, Beat; Kuonen, Pierre; Dandekar, Thomas; Atlan, David
2015-01-01
Over recent years next generation sequencing (NGS) technologies evolved from costly tools used by very few, to a much more accessible and economically viable technology. Through this recently gained popularity, its use-cases expanded from research environments into clinical settings. But the technical know-how and infrastructure required to analyze the data remain an obstacle for a wider adoption of this technology, especially in smaller laboratories. We present GensearchNGS, a commercial DNAseq software suite distributed by Phenosystems SA. The focus of GensearchNGS is the optimal usage of already existing infrastructure, while keeping its use simple. This is achieved through the integration of existing tools in a comprehensive software environment, as well as custom algorithms developed with the restrictions of limited infrastructures in mind. This includes the possibility to connect multiple computers to speed up computing intensive parts of the analysis such as sequence alignments. We present a typical DNAseq workflow for NGS data analysis and the approach GensearchNGS takes to implement it. The presented workflow goes from raw data quality control to the final variant report. This includes features such as gene panels and the integration of online databases, like Ensembl for annotations or Cafe Variome for variant sharing.
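GensearchNGS itself is a GUI suite, but the typical DNAseq workflow it implements (alignment, sorting, variant calling) is commonly scripted with standard command-line tools. The sketch below strings together bwa, samtools and bcftools as a generic illustration of such a pipeline; the file names are placeholders, the reference is assumed to have been indexed beforehand with `bwa index`, and this is not GensearchNGS's internal code.

```python
"""Generic DNAseq pipeline sketch: alignment, sorting, indexing and variant calling.

Assumes bwa, samtools and bcftools are installed, the reference has been indexed
with `bwa index`, and the read files exist; all paths are placeholders. This
illustrates a typical workflow, not GensearchNGS's internal implementation.
"""
import subprocess

REF = "reference.fa"
R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

steps = [
    # Align paired-end reads and sort the alignments into a BAM file.
    f"bwa mem -t 4 {REF} {R1} {R2} | samtools sort -o sample.sorted.bam -",
    # Index the BAM so downstream tools can random-access it.
    "samtools index sample.sorted.bam",
    # Pile up bases against the reference and call variants into a compressed VCF.
    f"bcftools mpileup -f {REF} sample.sorted.bam | bcftools call -mv -Oz -o sample.vcf.gz",
]

for cmd in steps:
    print("running:", cmd)
    subprocess.run(cmd, shell=True, check=True)
```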
Booth, Richard G; Sinclair, Barbara; Strudwick, Gillian; Brennan, Laura; Morgan, Lisa; Collings, Stephanie; Johnston, Jessica; Loggie, Brittany; Tong, James; Singh, Chantal
The purpose of this quality improvement project was to better understand how to teach medication administration underpinned by an electronic medication administration record (eMAR) system used in simulated, prelicensure nursing education. Methods included a workflow and integration analysis and a detailed process mapping of both an oral and a sublingual medication administration. Procedural and curriculum development considerations related to medication administration using eMAR technology are presented for nurse educators.
REMORA: a pilot in the ocean of BioMoby web-services.
Carrere, Sébastien; Gouzy, Jérôme
2006-04-01
Emerging web-services technology allows interoperability between multiple distributed architectures. Here, we present REMORA, a web server implemented according to the BioMoby web-service specifications, providing life science researchers with an easy-to-use workflow generator and launcher, a repository of predefined workflows and a survey system. Jerome.Gouzy@toulouse.inra.fr The REMORA web server is freely available at http://bioinfo.genopole-toulouse.prd.fr/remora, sources are available upon request from the authors.
COEUS: “semantic web in a box” for biomedical applications
2012-01-01
Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467
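Because COEUS exposes a SPARQL endpoint, integrated knowledge can be queried with any SPARQL client. The sketch below uses the SPARQLWrapper package against a placeholder endpoint URL with a generic triple-pattern query; the endpoint address and the graph's actual predicates are assumptions, not taken from the COEUS documentation.

```python
"""Sketch: querying a COEUS-style SPARQL endpoint with SPARQLWrapper.

The endpoint URL and query are placeholders; a deployed instance would expose
its own address and ontology-specific predicates.
"""
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/coeus/sparql")  # placeholder URL
endpoint.setQuery("""
    SELECT ?subject ?predicate ?object
    WHERE { ?subject ?predicate ?object }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["subject"]["value"], row["predicate"]["value"], row["object"]["value"])
```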
COEUS: "semantic web in a box" for biomedical applications.
Lopes, Pedro; Oliveira, José Luís
2012-12-17
As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
NASA Astrophysics Data System (ADS)
Thomer, A.
2017-12-01
Data provenance - the record of the varied processes that went into the creation of a dataset, as well as the relationships between resulting data objects - is necessary to support the reusability, reproducibility and reliability of earth science data. In sUAS-based research, capturing provenance can be particularly challenging because of the breadth and distributed nature of the many platforms used to collect, process and analyze data. In any given project, multiple drones, controllers, computers, software systems, sensors, cameras, imaging processing algorithms and data processing workflows are used over sometimes long periods of time. These platforms and processing result in dozens - if not hundreds - of data products in varying stages of readiness-for-analysis and sharing. Provenance tracking mechanisms are needed to make the relationships between these many data products explicit, and therefore more reusable and shareable. In this talk, I discuss opportunities and challenges in tracking provenance in sUAS-based research, and identify gaps in current workflow-capture technologies. I draw on prior work conducted as part of the IMLS-funded Site-Based Data Curation project, in which we developed methods of documenting in silico and ex silico (that is, computational and non-computational) workflows, and demonstrate this approach's applicability to research with sUASes. I conclude with a discussion of ontologies and other semantic technologies that have potential application in sUAS research.
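W3C PROV is one of the semantic technologies alluded to here: a flight's data products and processing steps can be expressed as PROV entities, activities and agents. The rdflib sketch below is a minimal, hypothetical example of such a record for an sUAS mission; all resource names are invented, and only the PROV vocabulary itself is standard.

```python
"""Minimal sketch: describing sUAS data provenance with W3C PROV terms in rdflib.

All resource names are invented; only the PROV vocabulary itself is standard.
"""
from rdflib import Graph, Namespace, Literal, RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/mission42/")

g = Graph()
g.bind("prov", PROV)

g.add((EX.raw_images, RDF.type, PROV.Entity))
g.add((EX.orthomosaic, RDF.type, PROV.Entity))
g.add((EX.sfm_processing, RDF.type, PROV.Activity))
g.add((EX.pilot, RDF.type, PROV.Agent))

g.add((EX.sfm_processing, PROV.used, EX.raw_images))            # inputs to the processing step
g.add((EX.orthomosaic, PROV.wasGeneratedBy, EX.sfm_processing))
g.add((EX.orthomosaic, PROV.wasAttributedTo, EX.pilot))
g.add((EX.orthomosaic, PROV.generatedAtTime, Literal("2017-06-01T15:30:00Z")))

print(g.serialize(format="turtle"))
```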
Towards a Brokering Framework for Business Process Execution
NASA Astrophysics Data System (ADS)
Santoro, Mattia; Bigagli, Lorenzo; Roncella, Roberto; Mazzetti, Paolo; Nativi, Stefano
2013-04-01
Advancing our knowledge of environmental phenomena and their interconnections requires an intensive use of environmental models. Due to the complexity of the Earth system, the representation of complex environmental processes often requires the use of more than one model (often from different disciplines). The Group on Earth Observation (GEO) launched the Model Web initiative to increase the present accessibility and interoperability of environmental models, allowing their flexible composition into complex Business Processes (BPs). A few basic principles underpin the Model Web concept (Nativi et al.): (i) open access, (ii) minimal entry barriers, (iii) a service-driven approach, and (iv) scalability. This work proposes an architectural solution, based on the Brokering approach for multidisciplinary interoperability, aiming to contribute to the Model Web vision. The Brokering approach is currently adopted in the new GEOSS Common Infrastructure (GCI), as presented at the GEO Plenary meeting in Istanbul, November 2011. We designed and prototyped a component called BP Broker. The high-level functionalities provided by the BP Broker are to: • discover the needed model implementations in an open, distributed and heterogeneous environment; • check I/O consistency of BPs and provide suggestions for resolving mismatches; • publish the eBP as a standard model resource for re-use; • submit the compiled BP (eBP) to a workflow engine for execution. A BP Broker has the following features: • support for multiple abstract BP specifications; • support for encoding in multiple workflow-engine languages. According to the Brokering principles, the designed system is flexible enough to support the use of multiple BP design (visual) tools, heterogeneous Web interfaces for model execution (e.g. OGC WPS, WSDL, etc.), and different workflow engines. The present implementation makes use of BPMN 2.0 notation for BP design and the jBPM workflow engine for eBP execution; however, the strong decoupling which characterizes the design of the BP Broker easily allows other technologies to be supported. The main benefits of the proposed approach are: (i) no need for a composition infrastructure, (ii) alleviation from the technicalities of workflow definitions, (iii) support for incomplete BPs, and (iv) the reuse of existing BPs as atomic processes. The BP Broker was designed and prototyped in the EC-funded projects EuroGEOSS (http://www.eurogeoss.eu) and UncertWeb (http://www.uncertweb.org); the latter project also provided the use scenarios that were used to test the framework: the eHabitat scenario (calculation of habitat similarity likelihood) and the FERA scenario (impact of climate change on land use and crop yield). Three more scenarios are presently under development. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreements n. 248488 and n. 226487. References: Nativi, S., Mazzetti, P., & Geller, G. (2012), "Environmental model access and interoperability: The GEO Model Web initiative", Environmental Modelling & Software, 1-15.
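The abstract does not describe how the BP Broker performs its I/O consistency check; as a purely illustrative sketch (data structures and names are hypothetical), a chained business process can be validated by matching each step's declared inputs against the outputs already produced upstream:

```python
# Hypothetical sketch of an I/O consistency check over a chained business process.
# Each step declares typed inputs and outputs; a mismatch report is produced.
steps = [
    {"name": "habitat_model", "inputs": {"land_cover": "raster"}, "outputs": {"similarity": "raster"}},
    {"name": "crop_model", "inputs": {"similarity": "raster", "climate": "timeseries"},
     "outputs": {"yield": "raster"}},
]

def check_io(steps, initial):
    """Return a list of (step, missing_or_mismatched_input) pairs."""
    available = dict(initial)              # e.g. data provided by the user up front
    problems = []
    for step in steps:
        for name, typ in step["inputs"].items():
            if available.get(name) != typ:
                problems.append((step["name"], name))
        available.update(step["outputs"])  # outputs become available downstream
    return problems

print(check_io(steps, initial={"land_cover": "raster"}))
# -> [('crop_model', 'climate')] : 'climate' was never produced or provided
```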
Lemke, Heinz U; Berliner, Leonard
2011-05-01
Appropriate use of information and communication technology (ICT) and mechatronic (MT) systems is viewed by many experts as a means to improve workflow and quality of care in the operating room (OR). This will require a suitable information technology (IT) infrastructure, as well as communication and interface standards, such as specialized extensions of DICOM, to allow data interchange between surgical system components in the OR. A design of such an infrastructure, sometimes referred to as surgical PACS, but better defined as a Therapy Imaging and Model Management System (TIMMS), will be introduced in this article. A TIMMS should support the essential functions that enable and advance image guided therapy, and in the future, a more comprehensive form of patient-model guided therapy. Within this concept, the "image-centric world view" of the classical PACS technology is complemented by an IT "model-centric world view". Such a view is founded in the special patient modelling needs of an increasing number of modern surgical interventions as compared to the imaging intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. The modelling aspects refer to both patient information and workflow modelling. Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient OR. The DICOM Working Group 24 (WG-24) has been established to develop DICOM objects and services related to image and model guided surgery. To determine these standards, it is important to define step-by-step surgical workflow practices and create interventional workflow models per procedures or per variable cases. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG-24 is to serve the therapeutic disciplines by enabling modelling technology to be based on standards. Copyright © 2011. Published by Elsevier Ireland Ltd.
PATHA: Performance Analysis Tool for HPC Applications
Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...
2016-02-18
Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs are often running over thousands of CPU cores and simultaneously performing data accesses, data movements, and computation. It is difficult to identify bottlenecks or to debug the performance issues in these large workflows. In order to address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open-source big data processing tools. Our framework can ingest system logs to extract key performance measures, and apply sophisticated statistical tools and data mining methods on the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of PATHA, we conduct a case study on the workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks, which reside in three tasks of the PTF workflow with a dependency on the density of celestial objects.
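PATHA's own processing engine is not detailed in this abstract; purely as a hedged illustration of the kind of analysis described (extracting task timings from logs and ranking bottleneck candidates), a pandas sketch over a hypothetical log layout:

```python
# Illustrative sketch: rank workflow tasks by total and mean wall time
# from a (hypothetical) structured log with one row per task execution.
import pandas as pd

logs = pd.DataFrame({
    "task":    ["subtract", "subtract", "detect", "detect", "catalog"],
    "node":    ["n01", "n02", "n01", "n03", "n02"],
    "seconds": [812.0, 790.5, 1420.3, 1395.8, 95.2],
})

summary = (logs.groupby("task")["seconds"]
               .agg(total="sum", mean="mean", runs="count")
               .sort_values("total", ascending=False))
print(summary)        # the top rows are the candidate bottleneck tasks
```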
Formalizing an integrative, multidisciplinary cancer therapy discovery workflow
McGuire, Mary F.; Enderling, Heiko; Wallace, Dorothy I.; Batra, Jaspreet; Jordan, Marie; Kumar, Sushil; Panetta, John C.; Pasquier, Eddy
2014-01-01
Although many clinicians and researchers work to understand cancer, there has been limited success to effectively combine forces and collaborate over time, distance, data and budget constraints. Here we present a workflow template for multidisciplinary cancer therapy that was developed during the 2nd Annual Workshop on Cancer Systems Biology sponsored by Tufts University, Boston, MA in July 2012. The template was applied to the development of a metronomic therapy backbone for neuroblastoma. Three primary groups were identified: clinicians, biologists, and scientists (mathematicians, computer scientists, physicists and engineers). The workflow described their integrative interactions; parallel or sequential processes; data sources and computational tools at different stages as well as the iterative nature of therapeutic development from clinical observations to in vitro, in vivo, and clinical trials. We found that theoreticians in dialog with experimentalists could develop calibrated and parameterized predictive models that inform and formalize sets of testable hypotheses, thus speeding up discovery and validation while reducing laboratory resources and costs. The developed template outlines an interdisciplinary collaboration workflow designed to systematically investigate the mechanistic underpinnings of a new therapy and validate that therapy to advance development and clinical acceptance. PMID:23955390
Meyer, Folker; Bagchi, Saurabh; Chaterji, Somali; Gerlach, Wolfgang; Grama, Ananth; Harrison, Travis; Paczian, Tobias; Trimble, William L; Wilke, Andreas
2017-09-26
As technologies change, MG-RAST is adapting. Newly available software is being included to improve accuracy and performance. As a computational service constantly running large-volume scientific workflows, MG-RAST is the right location to perform benchmarking and implement algorithmic or platform improvements, in many cases involving trade-offs between specificity, sensitivity and run-time cost. The work in [Glass EM, Dribinsky Y, Yilmaz P, et al. ISME J 2014;8:1-3] is an example; we use existing well-studied data sets as gold standards representing different environments and different technologies to evaluate any changes to the pipeline. Currently, we use well-understood data sets in MG-RAST as a platform for benchmarking. The use of artificial data sets for pipeline performance optimization has not added value, as these data sets do not present the same challenges as real-world data sets. In addition, the MG-RAST team welcomes suggestions for improvements of the workflow. We are currently working on versions 4.02 and 4.1, both of which contain significant input from the community and our partners; they will enable double barcoding and stronger inferences supported by longer-read technologies, and will increase throughput while maintaining sensitivity by using Diamond and SortMeRNA. On the technical platform side, the MG-RAST team intends to support the Common Workflow Language as a standard to specify bioinformatics workflows, both to facilitate development and to enable efficient high-performance implementation of the community's data analysis tasks. Published by Oxford University Press on behalf of Entomological Society of America 2017. This work is written by US Government employees and is in the public domain in the US.
A preliminary model of work during initial examination and treatment planning appointments.
Irwin, J Y; Torres-Urquidy, M H; Schleyer, T; Monaco, V
2009-01-10
Objective This study's objective was to formally describe the work process for charting and treatment planning in general dental practice to inform the design of a new clinical computing environment. Methods Using a process called contextual inquiry, researchers observed 23 comprehensive examination and treatment planning sessions during 14 visits to 12 general US dental offices. For each visit, field notes were analysed and reformulated as formalised models. Subsequently, each model type was consolidated across all offices and visits. Interruptions to the workflow, called breakdowns, were identified. Results Clinical work during dental examination and treatment planning appointments is a highly collaborative activity involving dentists, hygienists and assistants. Personnel with multiple overlapping roles complete complex multi-step tasks supported by a large and varied collection of equipment, artifacts and technology. Most of the breakdowns were related to technology which interrupted the workflow, caused rework and increased the number of steps in work processes. Conclusion Current dental software could be significantly improved with regard to its support for communication and collaboration, workflow, information design and presentation, information content, and data entry.
Zugaj, D; Chenet, A; Petit, L; Vaglio, J; Pascual, T; Piketty, C; Bourdes, V
2018-02-04
Currently, imaging technologies that can accurately assess or provide surrogate markers of the human cutaneous microvessel network are limited. Dynamic optical coherence tomography (D-OCT) allows the detection of blood flow in vivo and visualization of the skin microvasculature. However, image processing is necessary to correct images, filter artifacts, and exclude irrelevant signals. The objective of this study was to develop a novel image processing workflow to enhance the technical capabilities of D-OCT. This was a single-center, vehicle-controlled study including healthy volunteers aged 18-50 years. A capsaicin solution was applied topically on the subject's forearm to induce local inflammation. Measurements of capsaicin-induced increase in dermal blood flow, within the region of interest, were performed by laser Doppler imaging (LDI) (reference method) and D-OCT. Sixteen subjects were enrolled. A good correlation was shown between D-OCT and LDI, using the image processing workflow. Therefore, D-OCT offers an easy-to-use alternative to LDI, with good repeatability, new robust morphological features (dermal-epidermal junction localization), and quantification of the distribution of vessel size and changes in this distribution induced by capsaicin. The visualization of the vessel network was improved through block filtering and artifact removal. Moreover, the assessment of vessel size distribution allows a fine analysis of the vascular patterns. The newly developed image processing workflow enhances the technical capabilities of D-OCT for the accurate detection and characterization of microcirculation in the skin. A direct clinical application of this image processing workflow is the quantification of the effect of topical treatment on skin vascularization. © 2018 The Authors. Skin Research and Technology Published by John Wiley & Sons Ltd.
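The study's actual processing pipeline is not reproduced here; purely as an illustration of the kind of block filtering and artifact suppression the abstract describes, a minimal sketch on a synthetic 2D flow-intensity map:

```python
# Illustrative sketch only: suppress speckle and isolated artifacts in a
# (synthetic) 2D flow-intensity map before vessel quantification.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
flow = rng.random((128, 128))                 # stand-in for a D-OCT en-face flow map
flow[60:64, :] += 2.0                         # a synthetic "vessel" band

smoothed = ndimage.median_filter(flow, size=5)          # block-wise noise suppression
vessels = smoothed > smoothed.mean() + smoothed.std()   # crude vessel mask
labeled, n = ndimage.label(vessels)
sizes = ndimage.sum(vessels, labeled, range(1, n + 1))
clean = np.isin(labeled, np.nonzero(sizes > 20)[0] + 1)  # drop tiny artifact blobs

print(f"{n} candidate regions, {int(clean.sum())} vessel pixels kept")
```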
How To Organize Technology Areas.
ERIC Educational Resources Information Center
Fickes, Michael
1998-01-01
Highlights three ways a college or university can organize its technology areas to maximize comfort and save space. Examples are provided on controlling equipment sprawl through thoughtful workstation development, designing computer rooms for flexibility and effective cable management, and emulating business setups that are work-flow efficient and…
Vandenberg, Ann E; van Beijnum, Bert-Jan; Overdevest, Vera G P; Capezuti, Elizabeth; Johnson, Theodore M
Falls remain a major geriatric problem, and the search for new solutions continues. We investigated how existing fall prevention technology was experienced within the nursing home nurses' environment and workflow. Our NIH-funded study in an American nursing home was followed by a cultural learning exchange with a Dutch nursing home. We constructed two case reports from interview and observational data and compared the magnitude of falls, safety cultures, and technology characteristics and effectiveness. Falls were a high-magnitude problem at the US site, with a collectively vigilant safety culture attending to non-directional audible alarms; falls were a low-magnitude problem at the NL site, which employed customizable infrared sensors that directed text alerts to assigned staff members' mobile devices in a patient-centered care culture. Across cases, 1) a coordinated communication system was essential in facilitating effective fall prevention alert response, and 2) nursing home safety culture is tightly associated with the chosen technological system. Copyright © 2016 Elsevier Inc. All rights reserved.
Using conceptual work products of health care to design health IT.
Berry, Andrew B L; Butler, Keith A; Harrington, Craig; Braxton, Melissa O; Walker, Amy J; Pete, Nikki; Johnson, Trevor; Oberle, Mark W; Haselkorn, Jodie; Paul Nichol, W; Haselkorn, Mark
2016-02-01
This paper introduces a new, model-based design method for interactive health information technology (IT) systems. This method extends workflow models with models of conceptual work products. When the health care work being modeled is substantially cognitive, tacit, and complex in nature, graphical workflow models can become too complex to be useful to designers. Conceptual models complement and simplify workflows by providing an explicit specification for the information product they must produce. We illustrate how conceptual work products can be modeled using standard software modeling language, which allows them to provide fundamental requirements for what the workflow must accomplish and the information that a new system should provide. Developers can use these specifications to envision how health IT could enable an effective cognitive strategy as a workflow with precise information requirements. We illustrate the new method with a study conducted in an outpatient multiple sclerosis (MS) clinic. This study shows specifically how the different phases of the method can be carried out, how the method allows for iteration across phases, and how the method generated a health IT design for case management of MS that is efficient and easy to use. Copyright © 2015 Elsevier Inc. All rights reserved.
Automation of lidar-based hydrologic feature extraction workflows using GIS
NASA Astrophysics Data System (ADS)
Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.
2016-10-01
With the advent of LiDAR technology, higher-resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is in resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows which can take a lot of time for researchers through manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are created in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently being used as an aid for the researchers in hydrologic feature extraction by simplifying the workflows, eliminating human errors when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of different workflows developed by Phil-LiDAR 2 Project 4 in Streams, Irrigation Network and Inland Wetlands extraction.
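The PHD Toolkit itself is not reproduced here; as a hedged sketch of the kind of repetitive chain such an ArcGIS toolbox automates, a generic LiDAR-DEM stream-delineation script using arcpy's Spatial Analyst tools (paths and the accumulation threshold are placeholders):

```python
# Generic stream-extraction sketch (not the PHD Toolkit): fill a LiDAR-derived DEM,
# derive flow direction/accumulation, threshold, and export stream polylines.
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con, StreamToFeature

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

dem_path = r"C:\data\lidar_dem.tif"          # placeholder input DEM
out_streams = r"C:\data\streams.shp"         # placeholder output
threshold = 5000                             # placeholder accumulation threshold (cells)

filled = Fill(dem_path)                      # remove sinks in the DEM
flow_dir = FlowDirection(filled)
flow_acc = FlowAccumulation(flow_dir)
stream_raster = Con(flow_acc > threshold, 1)  # cells draining enough area become streams
StreamToFeature(stream_raster, flow_dir, out_streams)
print("Stream network written to", out_streams)
```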
Pozzi, Alessandro; Arcuri, Lorenzo; Moy, Peter K
2018-03-01
The growing interest in minimally invasive implant placement and delivery of a prefabricated provisional prosthesis immediately, thus minimizing "time to teeth," has led to the development of numerous 3-dimensional (3D) planning software programs. Given the enhancements associated with fully digital workflows, such as better 3D soft-tissue visualization and virtual tooth rendering, computer-guided implant surgery and immediate function has become an effective and reliable procedure. This article describes how modern implant planning software programs provide a comprehensive digital platform that enables efficient interplay between the surgical and restorative aspects of implant treatment. These new technologies that streamline the overall digital workflow allow transformation of the digital wax-up into a personalized, CAD/CAM-milled provisional restoration. Thus, collaborative digital workflows provide a novel approach for time-efficient delivery of a customized, screw-retained provisional restoration on the day of implant surgery, resulting in improved predictability for immediate function in the partially edentate patient.
Hbim Methodology as a Bridge Between Italy and Argentina
NASA Astrophysics Data System (ADS)
Moreira, A.; Quattrini, R.; Maggiolo, G.; Mammoli, R.
2018-05-01
The availability of efficient HBIM workflows could represent a very important change towards a more efficient management of the historical real estate. The present work shows how to obtain accurate and reliable information about heritage buildings through reality capture and 3D modelling to support restoration purposes or knowledge-based applications. Two case studies metaphorically join Italy with Argentina. The research article explains the workflows applied at the Palazzo Ferretti in Ancona and the Manzana Histórica de la Universidad Nacional del Litoral, providing a constructive comparison and blending technological and theoretical approaches. In a bottom-up process, the assessment of the two case studies validates a workflow that allows the achievement of a useful and proper data enrichment of each HBIM model. Another key aspect is the Level of Development (LOD) evaluation of both models: different ranges and scales are defined in America (100-500) and in Italy (A-G); nevertheless, it is possible to obtain standard shared procedures, enabling facilitation of HBIM development and diffusion in operating workflows.
Exploring Local Public Health Workflow in the Context of Automated Translation Technologies
Mandel, Hannah; Turner, Anne M.
2013-01-01
Despite the growing limited English proficiency (LEP) population in the US, and federal regulations requiring multilingual health information be available for LEP individuals, there is a lack of available high quality multilingual health promotion materials. The costs and personnel time associated with creating high quality translations serve as barriers to their creation, especially in resource limited public health settings. To explore the potential adoption of novel machine translation and document dissemination technologies for improving the creation and sharing of translated public health materials, we interviewed key health department personnel in Washington State. We analyzed translation workflow, elucidated key themes regarding public health translation work, and assessed attitudes towards electronic document exchange and machine translation. Public health personnel expressed the need for human quality assurance and oversight, but appreciated the potential of novel information technologies to assist in the production and dissemination of translated materials for public health practice. PMID:24551385
MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.
Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y
2018-01-02
Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer.
Latimer, Luke N; Dueber, John E
2017-06-01
A common challenge in metabolic engineering is rapidly identifying rate-controlling enzymes in heterologous pathways for subsequent production improvement. We demonstrate a workflow to address this challenge and apply it to improving xylose utilization in Saccharomyces cerevisiae. For eight reactions required for conversion of xylose to ethanol, we screened enzymes for functional expression in S. cerevisiae, followed by a combinatorial expression analysis to achieve pathway flux balancing and identification of limiting enzymatic activities. In the next round of strain engineering, we increased the copy number of these limiting enzymes and again tested the eight-enzyme combinatorial expression library in this new background. This workflow yielded a strain that has a ∼70% increase in biomass yield and ∼240% increase in xylose utilization. Finally, we chromosomally integrated the expression library. This library enriched for strains with multiple integrations of the pathway, which likely were the result of tandem integrations mediated by promoter homology. Biotechnol. Bioeng. 2017;114: 1301-1309. © 2017 Wiley Periodicals, Inc.
Efficient genome editing of differentiated renal epithelial cells.
Hofherr, Alexis; Busch, Tilman; Huber, Nora; Nold, Andreas; Bohn, Albert; Viau, Amandine; Bienaimé, Frank; Kuehn, E Wolfgang; Arnold, Sebastian J; Köttgen, Michael
2017-02-01
Recent advances in genome editing technologies have enabled the rapid and precise manipulation of genomes, including the targeted introduction, alteration, and removal of genomic sequences. However, respective methods have been described mainly in non-differentiated or haploid cell types. Genome editing of well-differentiated renal epithelial cells has been hampered by a range of technological issues, including optimal design, efficient expression of multiple genome editing constructs, attainable mutation rates, and best screening strategies. Here, we present an easily implementable workflow for the rapid generation of targeted heterozygous and homozygous genomic sequence alterations in renal cells using transcription activator-like effector nucleases (TALENs) and the clustered regularly interspaced short palindromic repeat (CRISPR) system. We demonstrate the versatility of established protocols by generating novel cellular models for studying autosomal dominant polycystic kidney disease (ADPKD). Furthermore, we show that cell culture-validated genetic modifications can be readily applied to mouse embryonic stem cells (mESCs) for the generation of corresponding mouse models. The described procedure for efficient genome editing can be applied to any cell type to study physiological and pathophysiological functions in the context of precisely engineered genotypes.
Improving OCD time to solution using Signal Response Metrology
NASA Astrophysics Data System (ADS)
Fang, Fang; Zhang, Xiaoxiao; Vaid, Alok; Pandev, Stilian; Sanko, Dimitry; Ramanathan, Vidya; Venkataraman, Kartik; Haupt, Ronny
2016-03-01
In recent technology nodes, advanced processes and novel integration schemes have challenged the precision limits of conventional metrology, as critical dimensions (CD) of devices shrink toward the sub-nanometer region. Optical metrology has proved its capability to precisely detect intricate details of complex structures; however, conventional RCWA-based (rigorous coupled wave analysis) scatterometry has the limitations of long time-to-results and a lack of flexibility to adapt to wide process variations. Signal Response Metrology (SRM) is a new metrology technique targeted at reducing the consumption of engineering and computation resources by eliminating geometric/dispersion modeling and spectral simulation from the workflow. This is achieved by directly correlating the spectra acquired from a set of wafers with known process variations encoded. At SPIE 2015, we presented the results of an SRM application in lithography metrology and control [1], in which a new measurement recipe for focus/dose monitoring was set up in hours. This work demonstrates our recent field exploration of SRM implementation in 20 nm technology and beyond, including focus metrology for scanner control, post-etch geometric profile measurement, and actual device profile metrology.
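SRM's internals are not disclosed in this abstract; the core idea it states, correlating measured spectra directly with known, encoded process variations, can be illustrated under that assumption with a simple spectra-to-parameter regression on synthetic data:

```python
# Illustration of the stated SRM idea: learn a direct mapping from measured spectra
# to an encoded process parameter (e.g. focus), skipping physical RCWA modeling.
# Data here are synthetic; the real signal model is unknown to this sketch.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_wafers, n_channels = 120, 400
focus = rng.uniform(-60, 60, n_wafers)                    # known programmed focus offsets (nm)
response = np.outer(focus, rng.normal(size=n_channels))   # synthetic spectral response
spectra = response + rng.normal(scale=5.0, size=(n_wafers, n_channels))

X_train, X_test, y_train, y_test = train_test_split(spectra, focus, random_state=0)
model = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out wafers:", round(model.score(X_test, y_test), 3))
```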
Implementation and Challenges of Direct Acoustic Dosing into Cell-Based Assays.
Roberts, Karen; Callis, Rowena; Ikeda, Tim; Paunovic, Amalia; Simpson, Carly; Tang, Eric; Turton, Nick; Walker, Graeme
2016-02-01
Since the adoption of Labcyte Echo Acoustic Droplet Ejection (ADE) technology by AstraZeneca in 2005, ADE has become the preferred method for compound dosing into both biochemical and cell-based assays across AstraZeneca research and development globally. The initial implementation of Echos and the direct dosing workflow provided AstraZeneca with a unique set of challenges. In this article, we outline how direct Echo dosing has evolved over the past decade in AstraZeneca. We describe the practical challenges of applying ADE technology to 96-well, 384-well, and 1536-well assays and how AstraZeneca developed and applied software and robotic solutions to generate fully automated and effective cell-based assay workflows. © 2015 Society for Laboratory Automation and Screening.
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149
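The exact functional form of the performance models is not given in the abstract; a minimal sketch, assuming the three stages simply add up, estimates end-to-end time per candidate resource and picks the fastest (all numbers are placeholders):

```python
# Hedged sketch: additive three-stage estimate (transfer + queue + compute) used to
# rank candidate compute resources for a reconstruction workflow. Numbers are made up.
def estimate_runtime(data_gb, bandwidth_gbps, queue_wait_s, compute_s):
    transfer_s = data_gb * 8 / bandwidth_gbps          # stage (i): data transfer
    return transfer_s + queue_wait_s + compute_s       # stages (ii) and (iii)

sites = {
    "cluster_A": dict(bandwidth_gbps=10, queue_wait_s=600, compute_s=5400),
    "cluster_B": dict(bandwidth_gbps=40, queue_wait_s=1800, compute_s=3600),
}

data_gb = 750  # placeholder dataset size
for name, params in sites.items():
    print(name, round(estimate_runtime(data_gb, **params)), "s")

best = min(sites, key=lambda s: estimate_runtime(data_gb, **sites[s]))
print("preferred resource:", best)
```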
A Community-Driven Workflow Recommendations and Reuse Infrastructure
NASA Astrophysics Data System (ADS)
Zhang, J.; Votava, P.; Lee, T. J.; Lee, C.; Xiao, S.; Nemani, R. R.; Foster, I.
2013-12-01
Aiming to connect the Earth science community to accelerate the rate of discovery, NASA Earth Exchange (NEX) has established an online repository and platform, so that researchers can publish and share their tools and models with colleagues. In recent years, workflow has become a popular technique at NEX for Earth scientists to define executable multi-step procedures for data processing and analysis. The ability to discover and reuse knowledge (sharable workflows or workflow artifacts) is critical to the future advancement of science. However, as reported in our earlier study, the reusability of scientific artifacts is currently very low. Scientists often do not feel confident in using other researchers' tools and utilities. One major reason is that researchers are often unaware of the existence of others' data preprocessing processes. Meanwhile, researchers often do not have time to fully document the processes and expose them to others in a standard way. These issues cannot be overcome by the existing workflow search technologies used in NEX and other data projects. Therefore, this project aims to develop a proactive recommendation technology based on collective NEX user behaviors. In this way, we aim to promote and encourage process and workflow reuse within NEX. Particularly, we focus on leveraging peer scientists' best practices to support the recommendation of artifacts developed by others. Our underlying theoretical foundation is rooted in social cognitive theory, which holds that people learn by watching what others do. Our fundamental hypothesis is that sharable artifacts have network properties, much like humans in social networks. More generally, reusable artifacts form various types of social relationships (ties), and may be viewed as forming what organizational sociologists who use network analysis to study human interactions call a 'knowledge network.' In particular, we will tackle two research questions: R1: What hidden knowledge may be extracted from usage history to help Earth scientists better understand existing artifacts and how to use them in a proper manner? R2: Informed by insights derived from their computing contexts, how could such hidden knowledge be used to facilitate artifact reuse by Earth scientists? Our study of the two research questions will provide answers to three technical questions aiming to assist NEX users during workflow development: 1) How to determine what topics interest the researcher? 2) How to find appropriate artifacts? and 3) How to advise the researcher in artifact reuse? In this paper, we report our ongoing efforts to leverage social networking theory and analysis techniques to provide dynamic advice on artifact reuse to NEX users based on their surrounding contexts. As a proof of concept, we have designed and developed a plug-in to the VisTrails workflow design tool. When users develop workflows using VisTrails, our plug-in will proactively recommend the most relevant sub-workflows to the users.
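The recommendation technique itself is only sketched in the abstract; as one hedged illustration of usage-history mining, components can be ranked by how often peers have combined them with the artifacts already present in the user's workflow (the histories below are hypothetical):

```python
# Hypothetical sketch: co-occurrence-based suggestion of workflow components,
# inspired by (not reproducing) the usage-history mining described above.
from collections import Counter
from itertools import combinations

histories = [                      # past workflows, as sets of component names
    {"ndvi", "cloud_mask", "regrid"},
    {"ndvi", "cloud_mask", "anomaly"},
    {"regrid", "anomaly", "trend_fit"},
]

cooc = Counter()
for wf in histories:
    for a, b in combinations(sorted(wf), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(current, k=3):
    """Suggest up to k components frequently co-used with the current selection."""
    scores = Counter()
    for used in current:
        for (a, b), n in cooc.items():
            if a == used and b not in current:
                scores[b] += n
    return scores.most_common(k)

print(recommend({"ndvi"}))   # 'cloud_mask' ranks first (co-used twice with 'ndvi')
```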
75 FR 8043 - National Institute of Standards and Technology
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
... the National Institute of Standards and Technology ("NIST"), U.S. Department of Commerce, is... and commonwealths, to NIST's interest in the invention embodied in U.S. Patent No. 6,088,679 (Application No. 08/980,908), titled "Workflow Management Employing Role-based Access Control," NIST Docket...
What Not To Do: Anti-patterns for Developing Scientific Workflow Software Components
NASA Astrophysics Data System (ADS)
Futrelle, J.; Maffei, A. R.; Sosik, H. M.; Gallager, S. M.; York, A.
2013-12-01
Scientific workflows promise to enable efficient scaling-up of researcher code to handle large datasets and workloads, as well as documentation of scientific processing via standardized provenance records, etc. Workflow systems and related frameworks for coordinating the execution of otherwise separate components are limited, however, in their ability to overcome software engineering design problems commonly encountered in pre-existing components, such as scripts developed externally by scientists in their laboratories. In practice, this often means that components must be rewritten or replaced in a time-consuming, expensive process. In the course of an extensive workflow development project involving large-scale oceanographic image processing, we have begun to identify and codify 'anti-patterns'--problematic design characteristics of software--that make components fit poorly into complex automated workflows. We have gone on to develop and document low-effort solutions and best practices that efficiently address the anti-patterns we have identified. The issues, solutions, and best practices can be used to evaluate and improve existing code, as well as to guide the development of new components. For example, we have identified a common anti-pattern we call 'batch-itis' in which a script fails and then cannot perform more work, even if that work is not precluded by the failure. The solution we have identified--removing unnecessary looping over independent units of work--is often easier to code than the anti-pattern, as it eliminates the need for complex control flow logic in the component. Other anti-patterns we have identified are similarly easy to identify and often easy to fix. We have drawn upon experience working with three science teams at Woods Hole Oceanographic Institution, each of which has designed novel imaging instruments and associated image analysis code. By developing use cases and prototypes within these teams, we have undertaken formal evaluations of software components developed by programmers with widely varying levels of expertise, and have been able to discover and characterize a number of anti-patterns. Our evaluation methodology and testbed have also enabled us to assess the efficacy of strategies to address these anti-patterns according to scientifically relevant metrics, such as the ability of algorithms to perform faster than the rate of data acquisition and the accuracy of workflow component output relative to ground truth. The set of anti-patterns and solutions we have identified augments the body of better-known software engineering anti-patterns by addressing additional concerns that arise when a software component has to function as part of a workflow assembled out of independently-developed codebases. Our experience shows that identifying and resolving these anti-patterns reduces development time and improves performance without reducing component reusability.
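As a hedged illustration of the 'batch-itis' anti-pattern and its stated fix (the paper's actual components are not shown; function names are invented), compare a script that loops over every input internally and dies on the first failure with one that processes a single unit of work and leaves looping to the workflow engine:

```python
# Illustration only: the 'batch-itis' anti-pattern vs. a single-unit-of-work component.
import sys

def process(path):
    """Stand-in for an image-analysis step; raises on a bad input."""
    if path.endswith(".bad"):
        raise ValueError(f"cannot parse {path}")
    return f"{path}.result"

def batchitis(paths):
    # Anti-pattern: one internal loop, so the first failure aborts all remaining work,
    # even though the other inputs are independent of the failed one.
    results = []
    for p in paths:
        results.append(process(p))
    return results

def main():
    # Fix: handle exactly one unit of work per invocation; the workflow engine
    # handles looping, retries and parallelism, so independent inputs still succeed.
    print(process(sys.argv[1]))

if __name__ == "__main__":
    main()
```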
Informal Learning after Organizational Change
ERIC Educational Resources Information Center
Reardon, Robert F.
2004-01-01
This inductive, qualitative study investigates how learning took place among nine experienced engineers in an industrial setting after a major reorganization. A thematic analysis of the transcripts revealed that the learning was informal and that it fell into three distinct categories: learning new workflows, learning about the chemical process,…
ASCEM Data Browser (ASCEMDB) v0.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROMOSAN, ALEXANDRU
Data management tool designed for the Advanced Simulation Capability for Environmental Management (ASCEM) framework. Distinguishing features of this gateway include: (1) handling of complex geometry data, (2) an advanced selection mechanism, (3) state-of-the-art rendering of spatiotemporal data records, and (4) seamless integration with a distributed workflow engine.
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating the evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A service-oriented architecture approach based on Web Services was adopted to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big data workflows.
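The framework's engine is described only as an open-source big data tool; assuming a Spark-style engine, interactive exploration of task-level logs of the kind described might look like the following (the schema, field names and path are hypothetical):

```python
# Hedged sketch: interactive aggregation of task-level log records with PySpark.
# The JSON schema, field names and storage path are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf-log-exploration").getOrCreate()

logs = spark.read.json("hdfs:///logs/ptf/tasks/*.json")   # placeholder location
summary = (logs.groupBy("task_name")
               .agg(F.count("*").alias("runs"),
                    F.avg("wall_seconds").alias("mean_s"),
                    F.sum("wall_seconds").alias("total_s"))
               .orderBy(F.desc("total_s")))
summary.show(10)      # the largest total_s rows are the bottleneck candidates
spark.stop()
```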
Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam
2015-01-01
The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with a 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
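The abstract does not show the model form; definitive screening designs are commonly analyzed with a regression of main effects plus selected quadratic terms, so a hedged sketch of that analysis step (factor names, coded levels and responses are invented) could be:

```python
# Hedged sketch: fitting main effects and selected quadratic terms to coded
# definitive-screening-design results. Factors, responses and values are invented.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame({                 # coded factor levels (-1, 0, +1) per bioreactor run
    "temp":  [-1, 1, 0, -1, 1, 0, -1, 1, 0, 0],
    "ph":    [ 1, -1, 0, 0, 0, -1, 1, 0, -1, 1],
    "feed":  [ 0, 0, -1, 1, -1, 1, 0, -1, 1, 0],
    "titer": [2.1, 2.9, 2.4, 2.6, 3.1, 2.2, 2.0, 2.8, 2.5, 2.7],
})

model = smf.ols("titer ~ temp + ph + feed + I(temp**2) + I(feed**2)", data=runs).fit()
print(model.params.round(3))          # screened effect estimates
print("R^2:", round(model.rsquared, 3))
```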
The Cancer Analysis Virtual Machine (CAVM) project will leverage cloud technology, the UCSC Cancer Genomics Browser, and the Galaxy analysis workflow system to provide investigators with a flexible, scalable platform for hosting, visualizing and analyzing their own genomic data.
USDA-ARS's Scientific Manuscript database
Remarkable advances in next-generation sequencing (NGS) technologies, bioinformatics algorithms, and computational technologies have significantly accelerated genomic research. However, complicated NGS data analysis still remains a major bottleneck. RNA-seq, as one of the major areas in the NGS fi...
Integrated Communications and Work Efficiency: Impacts on Organizational Structure and Power.
ERIC Educational Resources Information Center
Wigand, Rolf T.
This paper reviews the work environment surrounding integrated office systems, synthesizes the known effects of automated office technologies, and discusses their impact on work efficiency in office environments. Particular attention is given to the effect of automated technologies on networks, workflow/processes, and organizational structure and…
Preclinical Feasibility of a Technology Framework for MRI-guided Iliac Angioplasty
Rube, Martin A.; Fernandez-Gutierrez, Fabiola; Cox, Benjamin F.; Holbrook, Andrew B.; Houston, J. Graeme; White, Richard D.; McLeod, Helen; Fatahi, Mahsa; Melzer, Andreas
2015-01-01
Purpose Interventional MRI has significant potential for image guidance of iliac angioplasty and related vascular procedures. A technology framework with in-room image display, control, communication and MRI-guided intervention techniques was designed and tested for its potential to provide safe, fast and efficient MRI-guided angioplasty of the iliac arteries. Methods A 1.5T MRI scanner was adapted for interactive imaging during endovascular procedures using new or modified interventional devices such as guidewires and catheters. A perfused vascular phantom was used for testing. Pre-, intra- and post-procedural visualization and measurement of vascular morphology and flow was implemented. A detailed analysis of X-Ray fluoroscopic angiography workflow was conducted and applied. Two interventional radiologists and one physician in training performed 39 procedures. All procedures were timed and analyzed. Results MRI-guided iliac angioplasty procedures were successfully performed with progressive adaptation of techniques and workflow. The workflow, setup and protocol enabled a reduction in table time for a dedicated MRI-guided procedure to 6 min 33 s with a mean procedure time of 9 min 2 s, comparable to the mean procedure time of 8 min 42 s for the standard X-Ray guided procedure. Conclusions MRI-guided iliac vascular interventions were found to be feasible and practical using this framework and optimized workflow. In particular the real-time flow analysis was found to be helpful for pre- and post-interventional assessments. Design optimization of the catheters and in vivo experiments are required before clinical evaluation. PMID:25102933
Workflow of CAD / CAM Scoliosis Brace Adjustment in Preparation Using 3D Printing.
Weiss, Hans-Rudolf; Tournavitis, Nicos; Nan, Xiaofeng; Borysov, Maksym; Paul, Lothar
2017-01-01
High correction bracing is the most effective conservative treatment for patients with scoliosis during growth. Still today, braces for the treatment of scoliosis are made by casting patients, while computer-aided design (CAD) and computer-aided manufacturing (CAM) are available with all possibilities to standardize pattern-specific brace treatment and improve wearing comfort. CAD / CAM brace production mainly relies on carving a polyurethane foam model which is the basis for vacuuming a polyethylene (PE) or polypropylene (PP) brace. The purpose of this short communication is to describe the workflow currently used and to outline future requirements with respect to 3D printing technology. This paper describes the steps of virtual brace adjustment as available today and outlines the great potential of future 3D printing technology. For 3D printing of scoliosis braces it is necessary to establish easy-to-use software plug-ins that allow adding 3D printing technology to the current workflow of virtual CAD / CAM brace adjustment. Textures and structures can be added to the brace models at certain well-defined locations, offering the potential of more wearing comfort without losing in-brace correction. Advances have to be made in the field of CAD / CAM software tools with respect to the design and generation of individually structured brace models based on currently well-established and standardized scoliosis brace libraries.
SADI, SHARE, and the in silico scientific method
2010-01-01
Background The emergence and uptake of Semantic Web technologies by the Life Sciences provides exciting opportunities for exploring novel ways to conduct in silico science. Web Service Workflows are already becoming first-class objects in “the new way”, and serve as explicit, shareable, referenceable representations of how an experiment was done. In turn, Semantic Web Service projects aim to facilitate workflow construction by biological domain-experts such that workflows can be edited, re-purposed, and re-published by non-informaticians. However the aspects of the scientific method relating to explicit discourse, disagreement, and hypothesis generation have remained relatively impervious to new technologies. Results Here we present SADI and SHARE - a novel Semantic Web Service framework, and a reference implementation of its client libraries. Together, SADI and SHARE allow the semi- or fully-automatic discovery and pipelining of Semantic Web Services in response to ad hoc user queries. Conclusions The semantic behaviours exhibited by SADI and SHARE extend the functionalities provided by Description Logic Reasoners such that novel assertions can be automatically added to a data-set without logical reasoning, but rather by analytical or annotative services. This behaviour might be applied to achieve the “semantification” of those aspects of the in silico scientific method that are not yet supported by Semantic Web technologies. We support this suggestion using an example in the clinical research space. PMID:21210986
Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan
2018-02-01
The effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors was evaluated. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Kranzfelder, Michael; Schneider, Armin; Gillen, Sonja; Feussner, Hubertus
2011-03-01
Technical progress in the operating room (OR) increases constantly, but advanced techniques for error prevention are lacking. It has been the vision to create intelligent OR systems ("autopilot") that not only collect intraoperative data but also interpret whether the course of the operation is normal or deviating from the schedule ("situation awareness"), to recommend the adequate next steps of the intervention, and to identify imminent risky situations. Recently introduced technologies in health care for real-time data acquisition (bar code, radiofrequency identification [RFID], voice and emotion recognition) may have the potential to meet these demands. This report aims to identify, based on the authors' institutional experience and a review of the literature (MEDLINE search 2000-2010), which technologies are currently most promising for providing the required data and to describe their fields of application and potential limitations. Retrieval of information on the functional state of the peripheral devices in the OR is technically feasible by continuous sensor-based data acquisition and online analysis. Using bar code technologies, automatic instrument identification seems conceivable, with information given about the actual part of the procedure and indication of any change in the routine workflow. The dynamics of human activities also comprise key information. A promising technology for continuous personnel tracking is data acquisition with RFID. Emotional data capture and analysis in the OR are difficult. Although technically feasible, nonverbal emotion recognition is difficult to assess. In contrast, emotion recognition by speech seems to be a promising technology for further workflow prediction. The presented technologies are a first step to achieving an increased situational awareness in the OR. However, workflow definition in surgery is feasible only if the procedure is standardized, the peculiarities of the individual patient are taken into account, the level of the surgeon's expertise is regarded, and a comprehensive data capture can be obtained.
Denis, Jean-Baptiste; Vandenbogaert, Mathias; Caro, Valérie
2016-01-01
The detection and characterization of emerging infectious agents have been a continuing public health concern. High Throughput Sequencing (HTS) or Next-Generation Sequencing (NGS) technologies have proven to be promising approaches for efficient and unbiased detection of pathogens in complex biological samples, providing access to comprehensive analyses. As NGS approaches typically yield millions of putatively representative reads per sample, efficient data management and visualization resources have become mandatory. Most usually, those resources are implemented through a dedicated Laboratory Information Management System (LIMS), solely to provide perspective regarding the available information. We developed an easily deployable web interface facilitating the management and bioinformatics analysis of metagenomics data samples. It was engineered to run associated and dedicated Galaxy workflows for the detection and eventual classification of pathogens. The web application allows easy interaction with existing Galaxy metagenomic workflows and facilitates the organization, exploration and aggregation of the most relevant sample-specific sequences among millions of genomic sequences, allowing users to determine their relative abundance and associate them with the most closely related organism or pathogen. The user-friendly Django-based interface associates the users’ input data and its metadata through a bio-IT-provided set of resources (a Galaxy instance, and both sufficient storage and grid computing power). Galaxy is used to handle and analyze the user’s input data from loading, indexing, mapping, and assembly to DB searches. Interaction between our application and Galaxy is ensured by the BioBlend library, which gives API-based access to Galaxy’s main features. Metadata about samples and runs, as well as the workflow results, are stored in the LIMS. For metagenomic classification and exploration purposes, we show, as a proof of concept, that integration of intuitive exploratory tools, like Krona for the representation of taxonomic classification, can be achieved very easily. In keeping with Galaxy’s collaborative spirit, the interface enables the sharing of scientific results with fellow team members. PMID:28451381
Correia, Damien; Doppelt-Azeroual, Olivia; Denis, Jean-Baptiste; Vandenbogaert, Mathias; Caro, Valérie
2015-01-01
The detection and characterization of emerging infectious agents have been a continuing public health concern. High Throughput Sequencing (HTS) or Next-Generation Sequencing (NGS) technologies have proven to be promising approaches for efficient and unbiased detection of pathogens in complex biological samples, providing access to comprehensive analyses. As NGS approaches typically yield millions of putatively representative reads per sample, efficient data management and visualization resources have become mandatory. Most usually, those resources are implemented through a dedicated Laboratory Information Management System (LIMS), solely to provide perspective regarding the available information. We developed an easily deployable web interface facilitating the management and bioinformatics analysis of metagenomics data samples. It was engineered to run associated and dedicated Galaxy workflows for the detection and eventual classification of pathogens. The web application allows easy interaction with existing Galaxy metagenomic workflows and facilitates the organization, exploration and aggregation of the most relevant sample-specific sequences among millions of genomic sequences, allowing users to determine their relative abundance and associate them with the most closely related organism or pathogen. The user-friendly Django-based interface associates the users' input data and its metadata through a bio-IT-provided set of resources (a Galaxy instance, and both sufficient storage and grid computing power). Galaxy is used to handle and analyze the user's input data from loading, indexing, mapping, and assembly to DB searches. Interaction between our application and Galaxy is ensured by the BioBlend library, which gives API-based access to Galaxy's main features. Metadata about samples and runs, as well as the workflow results, are stored in the LIMS. For metagenomic classification and exploration purposes, we show, as a proof of concept, that integration of intuitive exploratory tools, like Krona for the representation of taxonomic classification, can be achieved very easily. In keeping with Galaxy's collaborative spirit, the interface enables the sharing of scientific results with fellow team members.
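The interaction with Galaxy described above runs through the BioBlend library; a minimal sketch of that pattern is shown below. This is not the authors' code, and the server URL, API key, workflow name, and input file name are placeholders, but the BioBlend calls (creating a history, uploading a dataset, and invoking a named workflow) follow the library's documented API.

```python
# Minimal sketch of driving a Galaxy workflow through BioBlend, as the web
# interface described above does behind the scenes. URL, API key, workflow
# name, and input file are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

# Create a history to hold this sample's data and results.
history = gi.histories.create_history(name="metagenomics-sample-42")

# Upload the sample reads; the response contains the new dataset's id.
upload = gi.tools.upload_file("sample_reads.fastq", history["id"])
dataset_id = upload["outputs"][0]["id"]

# Find the pathogen-detection workflow by name and invoke it on the dataset.
workflow = next(w for w in gi.workflows.get_workflows()
                if w["name"] == "pathogen-detection")
invocation = gi.workflows.invoke_workflow(
    workflow["id"],
    inputs={"0": {"src": "hda", "id": dataset_id}},
    history_id=history["id"],
)
print("Invocation state:", invocation["state"])
```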
Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.
2013-01-01
Background: Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider’s prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives: The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods: We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA’s EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results: Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility’s “test” EHR system, thus demonstrating technical compatibility. Conclusion: To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results. PMID:24155789
Smith, M; Murphy, D; Laxmisan, A; Sittig, D; Reis, B; Esquivel, A; Singh, H
2013-01-01
Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider's prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA's EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility's "test" EHR system, thus demonstrating technical compatibility. To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results.
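As a rough illustration of the kind of tracking logic AWARE is described as performing, the sketch below scans a list of abnormal-result alerts and flags those without documented follow-up after a configurable number of days. The record fields and the 30-day threshold are illustrative assumptions, not details of the prototype's actual data model.

```python
# Illustrative sketch only: flag abnormal test result alerts that lack
# documented follow-up after a given interval. Fields and threshold are
# assumptions, not the AWARE prototype's actual data model.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Alert:
    patient_id: str
    test_name: str
    alert_date: date
    followup_date: Optional[date] = None  # None means no documented follow-up

def overdue_alerts(alerts: List[Alert], max_days: int = 30,
                   today: Optional[date] = None) -> List[Alert]:
    """Return alerts with no follow-up documented within max_days."""
    today = today or date.today()
    cutoff = timedelta(days=max_days)
    return [a for a in alerts
            if a.followup_date is None and (today - a.alert_date) > cutoff]

alerts = [
    Alert("p1", "chest imaging", date(2013, 1, 5)),
    Alert("p2", "fecal occult blood", date(2013, 3, 1), followup_date=date(2013, 3, 8)),
]
for a in overdue_alerts(alerts, max_days=30, today=date(2013, 4, 1)):
    print(f"Reminder: {a.test_name} for patient {a.patient_id} lacks follow-up")
```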
Grid-based platform for training in Earth Observation
NASA Astrophysics Data System (ADS)
Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor
2010-05-01
The GiSHEO platform [1], providing on-demand services for training and higher education in Earth Observation, has been developed, in the frame of an ESA-funded project through its PECS programme, to respond to the need for powerful education resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring multiple simultaneous short-response-time, data-intensive tasks, as in the case of a short training event, are the ones that have proved to be ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of four academic institutions involved in the project. The authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service exposing a particular workflow engine; the data indexing and discovery service for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems such as detecting the underground remains of old fortifications, walls or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without a visual design tool. The basics of SILK are the tasks and the relations (rules) between them. It is similar to the SCUFL language, but does not rely on XML, in order to allow the introduction of more workflow-specific constructs. Moreover, an event-condition-action (ECA) approach allows greater flexibility when expressing data and task dependencies, as well as the creation of adaptive workflows which can react to changes in the configuration of the Grid or in the workflow itself. Changes inside the grid are handled by creating specific rules which allow resource selection based on various task scheduling criteria. Modifications of the workflow are usually accomplished either by inserting or retracting rules belonging to it at runtime, or by modifying the executor of a task in case a better one is found. The former implies changes in the workflow's structure, while the latter does not necessarily mean a change of resource but rather a change of the algorithm used for solving the task. More details can be found in [3]. Another important platform component is the data indexing and storage service, GDIS, providing features for data storage, indexing data using a specialized RDBMS, finding data by various conditions, querying external services and keeping track of temporary data generated by other components. The data storage component of GDIS is responsible for storing the data using available storage backends such as local disk file systems (ext3), local cluster storage (GFS) or distributed file systems (HDFS).
A front-end GridFTP service interacts with the storage domains on behalf of the clients in a uniform way and also enforces the data-access security restrictions provided by other specialized services. The data indexing is performed by PostGIS. An advanced and flexible interface for searching the project's geographical repository is built around a custom query language (LLQL - Lisp Like Query Language) designed to provide fine-grained access to the data in the repository and to query external services (e.g. for exploiting the connection with the GENESI-DR catalog). More details can be found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first is based on Condor HTC and uses Condor as a job manager for task dispatching and worker nodes (for development purposes), while the second uses GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta-language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionality necessary to create the visual appearance of the lessons through the use of visual containers such as tools, patterns and templates. The teacher uses the platform for testing already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. References: [1] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [2] D. Petcu, D. Zaharie, M. Neagul, S. Panica, M. Frincu, D. Gorgan, T. Stefanut, V. Bacu, Remote Sensed Image Processing on Grids for Training in Earth Observation. In Image Processing, V. Kordic (ed.), In-Tech, January 2010. [3] M. Neagul, S. Panica, D. Petcu, D. Zaharie, D. Gorgan, Web and Grid Services for Training in Earth Observation, IDAACS 2009, IEEE Computer Press, 241-246. [4] M. Frincu, S. Panica, M. Neagul, D. Petcu, Gisheo: On Demand Grid Service Based Platform for EO Data Processing. HiperGrid 2009, Politehnica Press, 415-422. [5] D. Gorgan, T. Stefanut, V. Bacu, Grid Based Training Environment for Earth Observation, GPC 2009, LNCS 5529, 98-109.
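The event-condition-action style mentioned above for OSyRIS/SILK can be sketched generically. The rules, event fields, and task attributes below are invented for illustration and do not reproduce SILK syntax; they only show how ECA rules let an adaptive workflow react to grid events such as a node failure or the discovery of a better executor.

```python
# Generic event-condition-action (ECA) sketch, not SILK syntax: rules react
# to grid events by rescheduling tasks or swapping their executors.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    name: str
    executor: str
    state: str = "queued"

@dataclass
class Rule:
    event: str                               # event type this rule listens to
    condition: Callable[[Task, dict], bool]  # guard evaluated on the event
    action: Callable[[Task, dict], None]     # effect applied when it fires

def fire(rules: List[Rule], tasks: Dict[str, Task], event: dict) -> None:
    for rule in rules:
        if rule.event != event["type"]:
            continue
        task = tasks[event["task"]]
        if rule.condition(task, event):
            rule.action(task, event)

tasks = {"ndvi": Task("ndvi", executor="cluster-A")}
rules = [
    # Node failure: requeue the task on a fallback resource.
    Rule("node_failed",
         lambda t, e: t.executor == e["node"],
         lambda t, e: (setattr(t, "executor", e["fallback"]),
                       setattr(t, "state", "queued"))),
    # Better executor found: swap the algorithm/resource for a queued task.
    Rule("better_executor",
         lambda t, e: t.state == "queued",
         lambda t, e: setattr(t, "executor", e["executor"])),
]

fire(rules, tasks, {"type": "node_failed", "task": "ndvi",
                    "node": "cluster-A", "fallback": "cluster-B"})
print(tasks["ndvi"])  # executor is now cluster-B, state queued
```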
Desktop Publishing in the University: Current Progress, Future Visions.
ERIC Educational Resources Information Center
Smith, Thomas W.
1989-01-01
Discussion of the workflow involved in desktop publishing focuses on experiences at the College of Engineering at the University of Wisconsin at Madison. Highlights include cost savings and productivity gains in page layout and composition; editing, translation, and revision issues; printing and distribution; and benefits to the reader. (LRW)
ERIC Educational Resources Information Center
Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew
2015-01-01
Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…
The Brave New World of Learning.
ERIC Educational Resources Information Center
Adkins, Sam S.
2003-01-01
Explores how old and new training technologies are converging. Discusses the concept of integrated applications and provides a taxonomy of convergent enterprise applications for use with real-time workflow. (JOW)
Reddy, Madhu C; Paul, Sharoda A; Abraham, Joanna; McNeese, Michael; DeFlitch, Christopher; Yen, John
2009-04-01
The purpose of this study is to identify the major challenges to coordination between emergency department (ED) teams and emergency medical services (EMS) teams. We conducted a series of focus groups involving both ED and EMS team members, using a crisis scenario as the basis of the focus group discussion. We also collected organizational workflow data. We identified three major challenges to coordination between ED and EMS teams, including the ineffectiveness of current information and communication technologies, a lack of common ground, and breakdowns in information flow. The three challenges highlight the importance of designing systems from a socio-technical perspective. In particular, these inter-team coordination systems must support socio-technical issues such as awareness, context, and workflow between the two teams.
Streamlining the Design-to-Build Transition with Build-Optimization Software Tools.
Oberortner, Ernst; Cheng, Jan-Fang; Hillson, Nathan J; Deutsch, Samuel
2017-03-17
Scaling-up capabilities for the design, build, and test of synthetic biology constructs holds great promise for the development of new applications in fuels, chemical production, or cellular-behavior engineering. Construct design is an essential component in this process; however, not every designed DNA sequence can be readily manufactured, even using state-of-the-art DNA synthesis methods. Current biological computer-aided design and manufacture tools (bioCAD/CAM) do not adequately consider the limitations of DNA synthesis technologies when generating their outputs. Designed sequences that violate DNA synthesis constraints may require substantial sequence redesign or lead to price-premiums and temporal delays, which adversely impact the efficiency of the DNA manufacturing process. We have developed a suite of build-optimization software tools (BOOST) to streamline the design-build transition in synthetic biology engineering workflows. BOOST incorporates knowledge of DNA synthesis success determinants into the design process to output ready-to-build sequences, preempting the need for sequence redesign. The BOOST web application is available at https://boost.jgi.doe.gov and its Application Program Interfaces (API) enable integration into automated, customized DNA design processes. The herein presented results highlight the effectiveness of BOOST in reducing DNA synthesis costs and timelines.
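BOOST's constraint rules and API are not detailed in the abstract, so the sketch below only illustrates the general idea of pre-screening a designed sequence against synthesis constraints, using two commonly cited examples: overall GC content and maximum homopolymer run length. The thresholds and function names are assumptions, not BOOST's actual rules or interface.

```python
# Illustrative sketch of synthesis-constraint screening (not BOOST's API).
# Thresholds are placeholders; real vendors and tools apply many more rules.
import re

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_homopolymer(seq: str) -> int:
    return max(len(m.group(0)) for m in re.finditer(r"(.)\1*", seq.upper()))

def synthesis_violations(seq: str, gc_min=0.25, gc_max=0.65, max_run=8):
    """Return a list of human-readable constraint violations for a sequence."""
    problems = []
    gc = gc_content(seq)
    if not gc_min <= gc <= gc_max:
        problems.append(f"GC content {gc:.2f} outside [{gc_min}, {gc_max}]")
    run = longest_homopolymer(seq)
    if run > max_run:
        problems.append(f"homopolymer run of length {run} exceeds {max_run}")
    return problems

print(synthesis_violations("ATGC" * 20))                    # expected: no violations
print(synthesis_violations("ATG" + "A" * 12 + "GGGGCCCC"))  # homopolymer violation
```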
SearchGUI: A Highly Adaptable Common Interface for Proteomics Search and de Novo Engines.
Barsnes, Harald; Vaudel, Marc
2018-05-25
Mass-spectrometry-based proteomics has become the standard approach for identifying and quantifying proteins. A vital step consists of analyzing experimentally generated mass spectra to identify the underlying peptide sequences for later mapping to the originating proteins. We here present the latest developments in SearchGUI, a common open-source interface for the most frequently used freely available proteomics search and de novo engines that has evolved into a central component in numerous bioinformatics workflows.
An architecture model for multiple disease management information systems.
Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei
2013-04-01
Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool to support them in daily practice and to optimize the inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances the reusability and reduces the development time of disease management information systems. The proposed architecture model was successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumed analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the development time was consumed, and the workflow was improved. Overall user feedback was positive, and support for the daily workflow was rated high. The system empowers the case managers with better information and leads to better decision making.
An ontological knowledge framework for adaptive medical workflow.
Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir
2008-10-01
As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thus serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses: from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage and even create new context-aware medical workflows and execute them on the fly.
miRNA assays in the clinical laboratory: workflow, detection technologies and automation aspects.
Kappel, Andreas; Keller, Andreas
2017-05-01
microRNAs (miRNAs) are short non-coding RNA molecules that regulate gene expression in eukaryotes. Their differential abundance is indicative of, or even causative for, a variety of pathological processes including cancer and cardiovascular disorders. Due to their important biological function, miRNAs represent a promising class of novel biomarkers that may be used to diagnose life-threatening diseases and to monitor disease progression. Further, they may guide treatment selection or dosage of drugs. miRNAs from blood or derived fractions are particularly interesting candidates for routine laboratory applications, as they can be measured in most clinical laboratories already today. This assures good accessibility of the respective tests. Despite their great potential, miRNA-based diagnostic tests have not yet made their way into the clinical routine, and hence no standardized workflows have been established to measure miRNAs for patients' benefit. In this review we summarize the detection technologies and workflow options that exist to measure miRNAs, and we describe the advantages and disadvantages of each of these options. Moreover, we also provide a perspective on data analysis aspects that are vital for translation of raw data into actionable diagnostic test results.
Adventures in Private Cloud: Balancing Cost and Capability at the CloudSat Data Processing Center
NASA Astrophysics Data System (ADS)
Partain, P.; Finley, S.; Fluke, J.; Haynes, J. M.; Cronk, H. Q.; Miller, S. D.
2016-12-01
Since the beginning of the CloudSat Mission in 2006, the CloudSat Data Processing Center (DPC) at the Cooperative Institute for Research in the Atmosphere (CIRA) has been ingesting data from the satellite and other A-Train sensors, producing data products, and distributing them to researchers around the world. The computing infrastructure was specifically designed to fulfill the requirements as specified at the beginning of what nominally was a two-year mission. The environment consisted of servers dedicated to specific processing tasks in a rigid workflow to generate the required products. To the benefit of science and with credit to the mission engineers, CloudSat has lasted well beyond its planned lifetime and is still collecting data ten years later. Over that period, the requirements of the data processing system have greatly expanded and opportunities for providing value-added services have presented themselves. But while demands on the system have increased, the initial design allowed for very little expansion in terms of scalability and flexibility. The design did change to include virtual machine processing nodes and distributed workflows, but infrastructure management was still a time-consuming task when system modification was required to run new tests or implement new processes. To address the scalability, flexibility, and manageability of the system, Cloud computing methods and technologies are now being employed. The use of a public cloud like Amazon Elastic Compute Cloud or Google Compute Engine was considered but, among other issues, data transfer and storage costs become a problem, especially when demand fluctuates as a result of reprocessing and the introduction of new products and services. Instead, the existing system was converted to an on-premises private Cloud using the OpenStack computing platform and Ceph software-defined storage to reap the benefits of the Cloud computing paradigm. This work details the decisions that were made, the benefits that have been realized, the difficulties that were encountered, and the issues that still exist.
Wireless remote control clinical image workflow: utilizing a PDA for offsite distribution
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Documet, Luis; Documet, Jorge; Huang, H. K.; Muldoon, Jean
2004-04-01
Last year at RSNA we presented an application to perform wireless remote control of PACS image distribution utilizing a handheld device such as a Personal Digital Assistant (PDA). This paper describes the clinical experiences, including workflow scenarios, of implementing the PDA application to route exams from the clinical PACS archive server to various locations for offsite distribution of clinical PACS exams. By utilizing this remote control application, radiologists can manage image workflow distribution with a single wireless handheld device without impacting their clinical workflow on diagnostic PACS workstations. A PDA application was designed and developed to allow a physician to perform DICOM Query and C-Move requests from a clinical PACS archive to a CD-burning device for automatic burning of PACS data for offsite distribution. In addition, it was also used for convenient routing of historical PACS exams to the local web server, local workstations, and teleradiology systems. The application was evaluated by radiologists as well as other clinical staff who need to distribute PACS exams to offsite referring physician's offices and offsite radiologists. An application for image workflow management utilizing wireless technology was implemented in a clinical environment and evaluated. A PDA application was successfully utilized to perform DICOM Query and C-Move requests from the clinical PACS archive to various offsite exam distribution devices. Clinical staff can utilize the PDA to manage image workflow and PACS exam distribution conveniently for offsite consultations by referring physicians and radiologists. This solution allows radiologists to expand their effectiveness in health care delivery both within the radiology department and offsite by improving their clinical workflow.
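The remote-control operation described above reduces to a DICOM C-MOVE from the archive to a destination application entity (here, a CD-burning station). The sketch below uses the open-source pynetdicom library to show what such a request looks like; the host, port, AE titles, and UIDs are placeholders, and the original system was a custom PDA application rather than this library.

```python
# Sketch of a study-level DICOM C-MOVE using pynetdicom. Archive address,
# AE titles, and UIDs are placeholders; not the PDA application described above.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelMove

ae = AE(ae_title="PDA_CONTROL")
ae.add_requested_context(PatientRootQueryRetrieveInformationModelMove)

# Identifier for the study to be routed to the CD-burning device.
ds = Dataset()
ds.QueryRetrieveLevel = "STUDY"
ds.PatientID = "12345"
ds.StudyInstanceUID = "1.2.3.4.5.6.7.8.9"  # placeholder UID

assoc = ae.associate("pacs-archive.example.org", 104, ae_title="PACS_ARCHIVE")
if assoc.is_established:
    # Ask the archive to push the study to the AE titled CD_BURNER.
    for status, identifier in assoc.send_c_move(
            ds, "CD_BURNER", PatientRootQueryRetrieveInformationModelMove):
        if status:
            print("C-MOVE status: 0x{0:04X}".format(status.Status))
    assoc.release()
```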
Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian
2016-03-04
Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded style searches can only be made accessible if data analysis becomes not only unified but also and most importantly scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
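To give a flavor of the scripting interface described above, the sketch below strings together one database search and a statistical post-processing step with Ursgal's UController. The profile name, FASTA database, input file, and engine identifiers are placeholders based on the usage pattern in Ursgal's documentation; engine names may carry version suffixes and parameter names may differ between Ursgal releases.

```python
# Minimal Ursgal sketch (placeholder file names and profile; engine identifiers
# and parameter names may differ slightly between Ursgal versions).
import ursgal

uc = ursgal.UController(
    profile="LTQ XL low res",                     # instrument profile (assumed example)
    params={"database": "human_proteome.fasta"},  # placeholder FASTA database
)

# Run one database search engine on an mzML file ...
xtandem_results = uc.search(
    input_file="sample_01.mzML",
    engine="xtandem",
)

# ... and statistically post-process the results with Percolator.
validated_results = uc.validate(
    input_file=xtandem_results,
    engine="percolator",
)
print("Validated result file:", validated_results)
```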
Three-dimensional printing: changing clinical care or just a passing fad?
Ryan, Justin; Gregg, Chelsea; Frakes, David; Pophal, Stephen
2017-01-01
Advances in medical imaging and three-dimensional (3D) reconstruction software have enabled a proliferation of 3D modeling and 3D printing for clinical applications. In particular, 3D printing has garnered an extraordinary media presence over the past few years. There is growing optimism that 3D printing can address patient specificity and complexity for improved interventional and surgical planning. Will this relatively untested technology bring about a paradigm shift in the clinical environment, or is it just a transient fad? Case studies and series centered on 3D printing are omnipresent in clinical and engineering journals. These primarily qualitative studies support the potential efficacy of the emerging technology. Few studies analyze the value of 3D printing, weighing its potential benefits against increasing costs (e.g., institutional overhead, labor, and materials). Clinical integration of 3D printing is growing rapidly, and its adoption into clinical practice presents unique workflow challenges. There are numerous clinical trials on the horizon that will finally help to elucidate the measured impact of 3D printing on clinical outcomes through quantitative analyses of clinical and economic metrics. The continued integration of 3D printing into clinical practice seems all but certain as the value of this technology becomes more and more evident.
Automated quality control in a file-based broadcasting workflow
NASA Astrophysics Data System (ADS)
Zhang, Lina
2014-04-01
Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistently high-quality signal to audiences. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden defects in media content. It discusses the system framework and workflow control when automated QC is added, puts forward a QC criterion and presents QC software that follows this criterion, and reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient and help the station achieve a competitive advantage in the media market.
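The speed experiments mentioned above depend on running checks for many media files in parallel. The sketch below shows that pattern generically with Python's process pool; the check itself is a stand-in placeholder, not the station's actual QC criterion, and the file names are invented.

```python
# Generic sketch of parallelizing per-file QC checks; the check function is a
# placeholder, not the broadcaster's actual quality criterion.
import os
from concurrent.futures import ProcessPoolExecutor

def qc_check(path: str) -> tuple:
    """Toy check: flag missing or empty files (stand-in for real signal checks)."""
    size = os.path.getsize(path) if os.path.exists(path) else 0
    return path, size, size > 0

def run_qc(paths, workers: int = 4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for path, size, ok in pool.map(qc_check, paths):
            status = "OK" if ok else "FAILED"
            print(f"{status}\t{size:>10} bytes\t{path}")

if __name__ == "__main__":
    run_qc(["clip_0001.mxf", "clip_0002.mxf"])  # placeholder file names
```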
New data model with better functionality for VLab
NASA Astrophysics Data System (ADS)
da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.
2009-12-01
The VLab infrastructure and architecture were further developed to allow for several new features. First, workflows for first-principles calculations of thermodynamic properties and static elasticity, programmed in Java as Web Services, can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and avoid overloading the servers. Third, a new data model was implemented to guarantee the integrity of a project (workflow execution) in case of failure. The latter can happen in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Better and faster interaction with servers is now achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
Schedule-Aware Workflow Management Systems
NASA Astrophysics Data System (ADS)
Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.
Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}(1M)$ documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC
Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi
2018-03-19
The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop ecosystem to provide the necessary flexibility to reliably process, store, and aggregate $\mathcal{O}(1M)$ documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.
Comas, Jorge; Benfeitas, Rui; Vilaprinyo, Ester; Sorribas, Albert; Solsona, Francesc; Farré, Gemma; Berman, Judit; Zorrilla, Uxue; Capell, Teresa; Sandmann, Gerhard; Zhu, Changfu; Christou, Paul; Alves, Rui
2016-09-01
Plant synthetic biology is still in its infancy. However, synthetic biology approaches have been used to manipulate and improve the nutritional and health value of staple food crops such as rice, potato and maize. With current technologies, production yields of the synthetic nutrients are a result of trial and error, and systematic rational strategies to optimize those yields are still lacking. Here, we present a workflow that combines gene expression and quantitative metabolomics with mathematical modeling to identify strategies for increasing production yields of nutritionally important carotenoids in the seed endosperm synthesized through alternative biosynthetic pathways in synthetic lines of white maize, which is normally devoid of carotenoids. Quantitative metabolomics and gene expression data are used to create and fit parameters of mathematical models that are specific to four independent maize lines. Sensitivity analysis and simulation of each model is used to predict which gene activities should be further engineered in order to increase production yields for carotenoid accumulation in each line. Some of these predictions (e.g. increasing Zmlycb/Gllycb will increase accumulated β-carotenes) are valid across the four maize lines and consistent with experimental observations in other systems. Other predictions are line specific. The workflow is adaptable to any other biological system for which appropriate quantitative information is available. Furthermore, we validate some of the predictions using experimental data from additional synthetic maize lines for which no models were developed. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
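The modeling step described above can be illustrated with a toy two-step pathway model and a crude finite-difference sensitivity analysis. The equations, parameter values, and species names below are invented for illustration; they are not the maize carotenoid models fitted in the study, only the general shape of the calculation.

```python
# Toy pathway model (precursor -> product) with a finite-difference sensitivity
# of the final product level to each rate constant. All equations and parameter
# values are invented for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, k1, k2, k3):
    precursor, product = y
    d_precursor = k1 - k2 * precursor          # synthesis minus conversion
    d_product = k2 * precursor - k3 * product  # conversion minus turnover
    return [d_precursor, d_product]

def steady_product(params, t_end=200.0):
    sol = solve_ivp(pathway, (0.0, t_end), [0.0, 0.0], args=tuple(params))
    return sol.y[1, -1]  # product level at the end of the simulation

base = np.array([1.0, 0.5, 0.1])  # k1, k2, k3 (illustrative values)
p0 = steady_product(base)
for i, name in enumerate(["k1", "k2", "k3"]):
    bumped = base.copy()
    bumped[i] *= 1.01                      # 1% parameter perturbation
    sens = (steady_product(bumped) - p0) / (0.01 * p0)
    print(f"relative sensitivity of product to {name}: {sens:+.2f}")
```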
NASA Astrophysics Data System (ADS)
Castagnetti, C.; Dubbini, M.; Ricci, P. C.; Rivola, R.; Giannini, M.; Capra, A.
2017-05-01
The new era of design in architecture and civil engineering applications lies in the Building Information Modeling (BIM) approach, based on a 3D geometric model including a 3D database. This is easier for new constructions, whereas, when dealing with existing buildings, the creation of the BIM is based on accurate knowledge of the as-built construction. Such knowledge is provided by a 3D survey, often carried out with laser scanning technology or modern photogrammetry, which are able to guarantee an adequate point cloud in terms of resolution and completeness while balancing time consumption and costs against the required final accuracy. The BIM approach for existing buildings, and even more so for historical buildings, is not yet a well-known and thoroughly discussed process. There are still several choices to be addressed in the process from the survey to the model, and critical issues to be discussed in the modeling step, particularly when dealing with unconventional elements such as deformed geometries or historical elements. The paper describes a comprehensive workflow that goes from the survey through the modeling, focusing on critical issues and key points to obtain a reliable BIM of an existing monument. The case study employed to illustrate the workflow is the Basilica of St. Stefano in Bologna (Italy), a large monumental complex with great religious, historical and architectural assets.
Measuring the Impact of Technology on Nurse Workflow: A Mixed Methods Approach
ERIC Educational Resources Information Center
Cady, Rhonda Guse
2012-01-01
Background. Investment in health information technology (HIT) is rapidly accelerating. The absence of contextual or situational analysis of the environment in which HIT is incorporated makes it difficult to measure success or failure. The methodology introduced in this paper combines observational research with time-motion study to measure the…
Siebert, Stefan; Robinson, Mark D; Tintori, Sophia C; Goetz, Freya; Helm, Rebecca R; Smith, Stephen A; Shaner, Nathan; Haddock, Steven H D; Dunn, Casey W
2011-01-01
We investigated differential gene expression between functionally specialized feeding polyps and swimming medusae in the siphonophore Nanomia bijuga (Cnidaria) with a hybrid long-read/short-read sequencing strategy. We assembled a set of partial gene reference sequences from long-read data (Roche 454), and generated short-read sequences from replicated tissue samples that were mapped to the references to quantify expression. We collected and compared expression data with three short-read expression workflows that differ in sample preparation, sequencing technology, and mapping tools. These workflows were Illumina mRNA-Seq, which generates sequence reads from random locations along each transcript, and two tag-based approaches, SOLiD SAGE and Helicos DGE, which generate reads from particular tag sites. Differences in expression results across workflows were mostly due to the differential impact of missing data in the partial reference sequences. When all 454-derived gene reference sequences were considered, Illumina mRNA-Seq detected more than twice as many differentially expressed (DE) reference sequences as the tag-based workflows. This discrepancy was largely due to missing tag sites in the partial reference that led to false negatives in the tag-based workflows. When only the subset of reference sequences that unambiguously have tag sites was considered, we found broad congruence across workflows, and they all identified a similar set of DE sequences. Our results are promising in several regards for gene expression studies in non-model organisms. First, we demonstrate that a hybrid long-read/short-read sequencing strategy is an effective way to collect gene expression data when an annotated genome sequence is not available. Second, our replicated sampling indicates that expression profiles are highly consistent across field-collected animals in this case. Third, the impacts of partial reference sequences on the ability to detect DE can be mitigated through workflow choice and deeper reference sequencing.
Siebert, Stefan; Robinson, Mark D.; Tintori, Sophia C.; Goetz, Freya; Helm, Rebecca R.; Smith, Stephen A.; Shaner, Nathan; Haddock, Steven H. D.; Dunn, Casey W.
2011-01-01
We investigated differential gene expression between functionally specialized feeding polyps and swimming medusae in the siphonophore Nanomia bijuga (Cnidaria) with a hybrid long-read/short-read sequencing strategy. We assembled a set of partial gene reference sequences from long-read data (Roche 454), and generated short-read sequences from replicated tissue samples that were mapped to the references to quantify expression. We collected and compared expression data with three short-read expression workflows that differ in sample preparation, sequencing technology, and mapping tools. These workflows were Illumina mRNA-Seq, which generates sequence reads from random locations along each transcript, and two tag-based approaches, SOLiD SAGE and Helicos DGE, which generate reads from particular tag sites. Differences in expression results across workflows were mostly due to the differential impact of missing data in the partial reference sequences. When all 454-derived gene reference sequences were considered, Illumina mRNA-Seq detected more than twice as many differentially expressed (DE) reference sequences as the tag-based workflows. This discrepancy was largely due to missing tag sites in the partial reference that led to false negatives in the tag-based workflows. When only the subset of reference sequences that unambiguously have tag sites was considered, we found broad congruence across workflows, and they all identified a similar set of DE sequences. Our results are promising in several regards for gene expression studies in non-model organisms. First, we demonstrate that a hybrid long-read/short-read sequencing strategy is an effective way to collect gene expression data when an annotated genome sequence is not available. Second, our replicated sampling indicates that expression profiles are highly consistent across field-collected animals in this case. Third, the impacts of partial reference sequences on the ability to detect DE can be mitigated through workflow choice and deeper reference sequencing. PMID:21829563
NASA Astrophysics Data System (ADS)
Erickson, T.
2016-12-01
Deriving actionable information from Earth observation data obtained from sensors or models can be quite complicated, and sharing those insights with others in a form that they can understand, reproduce, and improve upon is equally difficult. Journal articles, even if digital, commonly present just a summary of an analysis that cannot be understood in depth or reproduced without major effort on the part of the reader. Here we show a method of improving scientific literacy by pairing a recently developed scientific presentation technology (Jupyter Notebooks) with a petabyte-scale platform for accessing and analyzing Earth observation and model data (Google Earth Engine). Jupyter Notebooks are interactive web documents that mix live code with annotations such as rich-text markup, equations, images, videos, hyperlinks and dynamic output. Notebooks were first introduced as part of the IPython project in 2011, and have since gained wide acceptance in the scientific programming community, initially among Python programmers but later across a wide range of scientific programming languages. While Jupyter Notebooks have been widely adopted for general data analysis, data visualization, and machine learning, to date there have been relatively few examples of using Jupyter Notebooks to analyze geospatial datasets. Google Earth Engine is a cloud-based platform for analyzing geospatial data, such as satellite remote sensing imagery and/or Earth system model output. Through its Python API, Earth Engine makes petabytes of Earth observation data accessible, and provides hundreds of algorithmic building blocks that can be chained together to produce high-level algorithms and outputs in real time. We anticipate that this technology pairing will facilitate a better way of creating, documenting, and sharing complex analyses that derive information on our Earth that can be used to promote broader understanding of the complex issues that it faces. http://jupyter.org https://earthengine.google.com
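A minimal example of the pairing described above, runnable inside a Jupyter notebook cell once the Earth Engine Python API is installed and authenticated: it computes a mean summer NDVI over a small region from a Landsat 8 composite. The collection ID, band names, dates, and coordinates are illustrative and may need adjusting to the current Earth Engine catalog.

```python
# Jupyter-notebook cell sketch: summarize NDVI with the Earth Engine Python API.
# Assumes the earthengine-api package is installed and the user has already
# authenticated; collection ID, bands, dates, and coordinates are illustrative.
import ee

ee.Initialize()

region = ee.Geometry.Point(-122.26, 37.87).buffer(5000)  # ~5 km buffer, example point

composite = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
    .filterDate("2016-06-01", "2016-09-01")
    .filterBounds(region)
    .median()
)

ndvi = composite.normalizedDifference(["B5", "B4"]).rename("NDVI")

mean_ndvi = ndvi.reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=region,
    scale=30,
).getInfo()

print("Mean summer NDVI:", mean_ndvi)
```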
A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*
Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing
2011-01-01
Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from being identified. Including known coding variations in protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three out of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow to data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines including Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108
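The abstract mentions a modified false discovery rate estimation method for variant identifications; its details are not given here, so the sketch below only shows the standard target-decoy FDR calculation that such methods build on, using made-up peptide-spectrum match (PSM) scores.

```python
# Standard target-decoy FDR sketch (background for the modified estimation the
# authors describe; their actual method is not reproduced here). Toy scores only.
def fdr_at_threshold(psms, threshold):
    """psms: list of (score, is_decoy). FDR ~ decoys / targets above threshold."""
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

psms = [(52.1, False), (48.3, False), (47.0, True), (41.2, False),
        (39.8, True), (35.5, False), (30.1, True)]

for thr in (50, 45, 40, 30):
    print(f"score >= {thr}: FDR ~ {fdr_at_threshold(psms, thr):.2f}")
```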
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next-generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Planning bioinformatics workflows using an expert system
Chen, Xiaoling; Chang, Jeffrey T.
2017-01-01
Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next-generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
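BETSY encodes tool capabilities in its own knowledge base and data model; as a generic illustration of the backwards-chaining idea it builds on, the tiny sketch below chains rules backwards from a goal data type to the available inputs. The rules, tools, and data type names are invented and are far simpler than BETSY's actual knowledge base.

```python
# Tiny backward-chaining sketch: given rules "inputs -> output", find a chain of
# tools producing a goal data type from the available data. Rules and names are
# invented and much simpler than BETSY's actual knowledge base.
RULES = [
    ("align_reads", ["fastq_reads", "reference_genome"], "bam_alignment"),
    ("call_variants", ["bam_alignment", "reference_genome"], "vcf_variants"),
    ("annotate", ["vcf_variants"], "annotated_variants"),
]

def plan(goal, available, rules=RULES):
    """Return an ordered list of (tool, output) steps that derives `goal`."""
    if goal in available:
        return []
    for tool, inputs, output in rules:
        if output != goal:
            continue
        steps = []
        try:
            for needed in inputs:
                steps += plan(needed, available, rules)
        except ValueError:
            continue  # this rule's inputs cannot be derived; try the next rule
        return steps + [(tool, output)]
    raise ValueError(f"no way to derive {goal}")

workflow = plan("annotated_variants", available={"fastq_reads", "reference_genome"})
for tool, output in workflow:
    print(f"run {tool} -> {output}")
```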
qPortal: A platform for data-driven biomedical research.
Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven
2018-01-01
Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce large amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.
Hans, Parminder K; Gray, Carolyn Steele; Gill, Ashlinder; Tiessen, James
2018-03-01
Aim: This qualitative study investigates how the Electronic Patient-Reported Outcome (ePRO) mobile application and portal system, designed to capture patient-reported measures to support self-management, affected primary care provider workflows. The Canadian health system is facing an ageing population that is living with chronic disease. Disruptive innovations like mobile health technologies can help to support health system transformation needed to better meet the multifaceted needs of the complex care patient. However, there are challenges with implementing these technologies in primary care settings, in particular the effect on primary care provider workflows. Over a six-week period, interdisciplinary primary care providers (n=6) and their complex care patients (n=12) used the ePRO mobile application and portal to collaboratively set goals, manage care plans, and support self-management using patient-reported measures. Secondary thematic analysis of focus groups, training sessions, and issue tracker reports captured user experiences at a Toronto area Family Health Team from October 2014 to January 2015. Findings: Key issues raised by providers included: liability concerns associated with remote monitoring, increased documentation activities due to a lack of interoperability between the app and the electronic patient record, increased provider anxiety with regard to the potential for the app to disrupt and infringe upon appointment time, and increased demands for patient engagement. Primary care providers reported the app helped to focus care plans and to begin a collaborative conversation on goal-setting. However, throughout our investigation we found a high level of provider resistance evidenced by consistent attempts to shift the app towards fitting with existing workflows rather than adapting much of their behaviour. As health systems seek innovative and disruptive models to better serve this complex patient population, provider change resistance will need to be addressed. New models and technologies cannot be disruptive in an environment that is resisting change.
The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services
NASA Astrophysics Data System (ADS)
Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas
Within this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact working of such services is difficult to predict in the context of a market and, therefore, an environment for evaluating its behavior in an emulated market is needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows trading goods using different market mechanisms. The capabilities of these components of the GridEcon Platform in conjunction with the economics-aware services are described in this paper in detail. The validation of an implemented market mechanism and a capacity planning service using the GridEcon Platform also demonstrated the usefulness of the GridEcon Platform.
Halász, László; Karányi, Zsolt; Boros-Oláh, Beáta; Kuik-Rózsa, Tímea; Sipos, Éva; Nagy, Éva; Mosolygó-L, Ágnes; Mázló, Anett; Rajnavölgyi, Éva; Halmos, Gábor; Székvölgyi, Lóránt
2017-01-01
The impact of R-loops on the physiology and pathology of chromosomes has been demonstrated extensively by chromatin biology research. The progress in this field has been driven by technological advancement of R-loop mapping methods that largely relied on a single approach, DNA-RNA immunoprecipitation (DRIP). Most of the DRIP protocols use the experimental design that was developed by a few laboratories, without paying attention to the potential caveats that might affect the outcome of RNA-DNA hybrid mapping. To assess the accuracy and utility of this technology, we pursued an analytical approach to estimate inherent biases and errors in the DRIP protocol. By performing DRIP-sequencing, qPCR, and receiver operator characteristic (ROC) analysis, we tested the effect of formaldehyde fixation, cell lysis temperature, mode of genome fragmentation, and removal of free RNA on the efficacy of RNA-DNA hybrid detection and implemented workflows that were able to distinguish complex and weak DRIP signals in a noisy background with high confidence. We also show that some of the workflows perform poorly and generate random answers. Furthermore, we found that the most commonly used genome fragmentation method (restriction enzyme digestion) led to the overrepresentation of lengthy DRIP fragments over coding ORFs, and this bias was enhanced at the first exons. Biased genome sampling severely compromised mapping resolution and prevented the assignment of precise biological function to a significant fraction of R-loops. The revised workflow presented herein is established and optimized using objective ROC analyses and provides reproducible and highly specific RNA-DNA hybrid detection. PMID:28341774
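The ROC-based comparison of alternative DRIP workflows described above can be reproduced in outline with scikit-learn, given a set of loci with known R-loop status and the per-locus DRIP signal produced by each protocol variant. The benchmark labels, signal distributions, and the notion of a "truth" set below are assumptions for illustration, not the authors' actual data.

```python
# Illustrative ROC comparison of DRIP workflow variants against a benchmark
# set of loci with assumed known R-loop status (1 = true R-loop, 0 = negative).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=500)                       # hypothetical benchmark labels

# Hypothetical per-locus DRIP signals from two protocol variants:
# one informative, one close to random (as reported for some workflows).
signal_good = truth * rng.normal(3.0, 1.0, 500) + rng.normal(0.0, 1.0, 500)
signal_poor = rng.normal(0.0, 1.0, 500)

for name, signal in [("optimized protocol", signal_good), ("poor protocol", signal_poor)]:
    fpr, tpr, _ = roc_curve(truth, signal)
    auc = roc_auc_score(truth, signal)
    print(f"{name}: AUC = {auc:.2f} (AUC near 0.5 indicates essentially random calls)")
```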
METAPHOR: Probability density estimation for machine learning based photometric redshifts
NASA Astrophysics Data System (ADS)
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
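The modularity described above (swapping MLPQNA for KNN or Random Forest) can be sketched with scikit-learn regressors behind a common interface. The photometry-perturbation scheme used here to build a per-galaxy PDF is one common empirical approach and is only assumed to approximate METAPHOR's internal procedure; the synthetic data and parameters are placeholders.

```python
# Sketch of a modular photo-z workflow: any regressor with fit/predict can act
# as the internal engine; a per-object PDF is built by perturbing the photometry.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

def photoz_pdf(engine, train_mags, train_z, test_mags, n_perturb=100, noise=0.05,
               bins=np.linspace(0.0, 1.0, 51)):
    """Return (point estimates, per-object normalized PDFs over `bins`)."""
    engine.fit(train_mags, train_z)
    point = engine.predict(test_mags)
    draws = np.stack([
        engine.predict(test_mags + np.random.normal(0.0, noise, test_mags.shape))
        for _ in range(n_perturb)
    ])                                                  # shape: (n_perturb, n_objects)
    pdfs = np.stack([np.histogram(draws[:, i], bins=bins, density=True)[0]
                     for i in range(test_mags.shape[0])])
    return point, pdfs

# Synthetic stand-in for SDSS-like magnitudes and spectroscopic redshifts.
rng = np.random.default_rng(1)
mags = rng.normal(20.0, 1.0, size=(1000, 5))
z = 0.1 * (mags[:, 0] - 18.0) + rng.normal(0.0, 0.02, 1000)

for engine in (KNeighborsRegressor(n_neighbors=10), RandomForestRegressor(n_estimators=100)):
    zhat, pdfs = photoz_pdf(engine, mags[:800], z[:800], mags[800:])
    print(type(engine).__name__, "mean |dz| =", round(np.mean(np.abs(zhat - z[800:])), 3))
```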
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
Model Checking for Verification of Interactive Health IT Systems
Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui
2015-01-01
Rigorous methods for design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166
[Change in process management by implementing RIS, PACS and flat-panel detectors].
Imhof, H; Dirisamer, A; Fischer, H; Grampp, S; Heiner, L; Kaderk, M; Krestan, C; Kainberger, F
2002-05-01
Implementation of radiological information systems (RIS) and picture archiving and communication systems (PACS) results in significant changes of workflow in a radiology department. Additional connection with flat-panel detectors leads to a further shortening of the work process. RIS and PACS implementation alone reduces the complete workflow by 21-80%. With flat-panel technology the image production process is further shortened by 25-30%. The number of workflow steps drops from the original 17 to 12 with the implementation of RIS and PACS, and to 5 with the integrated use of flat-panel detectors. These clearly recognizable workflow advantages require a corresponding financial investment. Several studies have shown that the capitalisation factor calculated over eight years is positive, with gains ranging between 5-25%. Whether the additional implementation of flat-panel detectors also results in a positive capitalisation over the years cannot yet be estimated exactly, because experience with the technology is still too short. Particularly critical are the interfaces, which need constant quality control. Our flat-panel detector system is fixed; special images--required in about 3-5% of all cases--still need conventional film-screen or phosphor-plate systems. Full-spine and long-leg examinations cannot be performed with sufficient exactness. Without question, implementation of an integrated RIS, PACS and flat-panel detector system requires excellent training of the employees because of the changes in workflow. The main benefits of such an integrated implementation are an increase in the quality of image and report data, easier handling--almost no cassettes are necessary any more--and a substantial shortening of workflow.
Ren, Hengqian; Hu, Pingfan; Zhao, Huimin
2017-08-01
Pathway refactoring serves as an invaluable synthetic biology tool for natural product discovery, characterization, and engineering. However, the complicated and laborious molecular biology techniques largely hinder its application in natural product research, especially in a high-throughput manner. Here we report a plug-and-play pathway refactoring workflow for high-throughput, flexible pathway construction and expression in both Escherichia coli and Saccharomyces cerevisiae. Biosynthetic genes were first cloned into pre-assembled helper plasmids with promoters and terminators, resulting in a series of expression cassettes. These expression cassettes were further assembled using Golden Gate reaction to generate fully refactored pathways. The inclusion of spacer plasmids in this system would not only increase the flexibility for refactoring pathways with different numbers of genes, but also facilitate gene deletion and replacement. As proof of concept, a total of 96 pathways for combinatorial carotenoid biosynthesis were built successfully. This workflow should be generally applicable to different classes of natural products produced by various organisms. Biotechnol. Bioeng. 2017;114: 1847-1854. © 2017 Wiley Periodicals, Inc.
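The combinatorial scale of the refactoring above (96 carotenoid pathways from a small set of parts) follows directly from enumerating promoter-gene cassette combinations. The sketch below only illustrates that enumeration with hypothetical part names; it does not represent the Golden Gate assembly chemistry or the authors' plasmid design.

```python
# Enumerate combinatorial pathway designs from interchangeable expression cassettes.
# Part names are hypothetical placeholders for promoters, genes and terminators.
from itertools import product

promoters = ["pLow", "pMed", "pHigh", "pVeryHigh"]
genes = ["crtE", "crtB", "crtI"]                    # core carotenoid biosynthesis genes
terminator = "tADH1"

designs = []
for promoter_choice in product(promoters, repeat=len(genes)):
    # One cassette per gene, choosing a promoter strength for each; a spacer part
    # could stand in for an omitted gene, mirroring the spacer plasmids above.
    cassettes = [f"{p}-{g}-{terminator}" for p, g in zip(promoter_choice, genes)]
    designs.append(cassettes)

print(len(designs), "pathway designs, e.g.:", designs[0])
# 4 promoter strengths ^ 3 genes = 64 designs here; adding more promoter or gene
# variants scales the library combinatorially (the paper reports 96 pathways).
```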
Zhou, Mowei; Paša-Tolić, Ljiljana; Stenoien, David L
2017-02-03
As histones play central roles in most chromosomal functions including regulation of DNA replication, DNA damage repair, and gene transcription, both their basic biology and their roles in disease development have been the subject of intense study. Because multiple post-translational modifications (PTMs) along the entire protein sequence are potential regulators of histones, a top-down approach, where intact proteins are analyzed, is ultimately required for complete characterization of proteoforms. However, significant challenges remain for top-down histone analysis primarily because of deficiencies in separation/resolving power and effective identification algorithms. Here we used state-of-the-art mass spectrometry and a bioinformatics workflow for targeted data analysis and visualization. The workflow uses ProMex for intact mass deconvolution, MSPathFinder as a search engine, and LcMsSpectator as a data visualization tool. When complemented with the open-modification tool TopPIC, this workflow enabled identification of novel histone PTMs including tyrosine bromination on histone H4 and H2A, H3 glutathionylation, and mapping of conventional PTMs along the entire protein for many histone subunits.
Advanced Combustion Numerics and Modeling - FY18 First Quarter Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitesides, R. A.; Killingsworth, N. J.; McNenly, M. J.
This project is focused on early stage research and development of numerical methods and models to improve advanced engine combustion concepts and systems. The current focus is on development of new mathematics and algorithms to reduce the time to solution for advanced combustion engine design using detailed fuel chemistry. The research is prioritized towards the most time-consuming workflow bottlenecks (computer and human) and accuracy gaps that slow ACS program members. Zero-RK, the fast and accurate chemical kinetics solver software developed in this project, is central to the research efforts and continues to be developed to address the current and emerging needs of the engine designers, engine modelers and fuel mechanism developers.
Orton, Dennis J.; Doucette, Alan A.
2013-01-01
Identification of biomarkers capable of differentiating between pathophysiological states of an individual is a laudable goal in the field of proteomics. Protein biomarker discovery generally employs high throughput sample characterization by mass spectrometry (MS), being capable of identifying and quantifying thousands of proteins per sample. While MS-based technologies have rapidly matured, the identification of truly informative biomarkers remains elusive, with only a handful of clinically applicable tests stemming from proteomic workflows. This underlying lack of progress is attributed in large part to erroneous experimental design, biased sample handling, as well as improper statistical analysis of the resulting data. This review will discuss in detail the importance of experimental design and provide some insight into the overall workflow required for biomarker identification experiments. Proper balance between the degree of biological vs. technical replication is required for confident biomarker identification. PMID:28250400
Enhanced reproducibility of SADI web service workflows with Galaxy and Docker.
Aranguren, Mikel Egaña; Wilkinson, Mark D
2015-01-01
Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data more amenable to complex analyses. This is demonstrated using a working example, which is made distributable and reproducible through a Docker image that includes SADI tools, along with the data and workflows that constitute the demonstration. The combination of Galaxy and Docker offers a solution for faithfully reproducing and sharing complex data retrieval and analysis workflows based on the SADI Semantic web service design patterns.
Riposan, Adina; Taylor, Ian; Owens, David R; Rana, Omer; Conley, Edward C
2007-01-01
In this paper we present mechanisms for imaging and spectral data discovery, as applied to the early detection of pathologic mechanisms underlying diabetic retinopathy in research and clinical trial scenarios. We discuss the Alchemist framework, built using a generic peer-to-peer architecture, supporting distributed database queries and complex search algorithms based on workflow. The Alchemist is a domain-independent search mechanism that can be applied to search and data discovery scenarios in many areas. We illustrate Alchemist's ability to perform complex searches composed as a collection of peer-to-peer overlays, Grid-based services and workflows, e.g. image and spectral data discovery for the early detection and prevention of retinal disease and for investigational drug discovery. The Alchemist framework is built on top of decentralised technologies and uses industry standards such as Web services and SOAP for messaging.
BioPartsDB: a synthetic biology workflow web-application for education and research.
Stracquadanio, Giovanni; Yang, Kun; Boeke, Jef D; Bader, Joel S
2016-11-15
Synthetic biology has become a widely used technology, and expanding applications in research, education and industry require progress tracking for team-based DNA synthesis projects. Although some vendors are beginning to supply multi-kilobase sequence-verified constructs, synthesis workflows starting with short oligos remain important for cost savings and pedagogical benefit. We developed BioPartsDB as an open source, extendable workflow management system for synthetic biology projects with entry points for oligos and larger DNA constructs and ending with sequence-verified clones. BioPartsDB is released under the MIT license and available for download at https://github.com/baderzone/biopartsdb Additional documentation and video tutorials are available at https://github.com/baderzone/biopartsdb/wiki An Amazon Web Services image is available from the AWS Market Place (ami-a01d07c8). joel.bader@jhu.edu. © The Author 2016. Published by Oxford University Press.
Integrating pathology and radiology disciplines: an emerging opportunity?
2012-01-01
Pathology and radiology form the core of cancer diagnosis, yet the workflows of both specialties remain ad hoc and occur in separate "silos," with no direct linkage between their case accessioning and/or reporting systems, even when both departments belong to the same host institution. Because both radiologists' and pathologists' data are essential to making correct diagnoses and appropriate patient management and treatment decisions, this isolation of radiology and pathology workflows can be detrimental to the quality and outcomes of patient care. These detrimental effects underscore the need for pathology and radiology workflow integration and for systems that facilitate the synthesis of all data produced by both specialties. With the enormous technological advances currently occurring in both fields, the opportunity has emerged to develop an integrated diagnostic reporting system that supports both specialties and, therefore, improves the overall quality of patient care. PMID:22950414
Dispel4py: An Open-Source Python library for Data-Intensive Seismology
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Krause, Amrey; Spinuso, Alessandro; Klampanos, Iraklis; Danecek, Peter; Atkinson, Malcolm
2015-04-01
Scientific workflows are a necessary tool for many scientific communities as they enable easy composition and execution of applications on computing resources while scientists can focus on their research without being distracted by computation management. Nowadays, scientific communities (e.g. Seismology) have access to a large variety of computing resources and their computational problems are best addressed using parallel computing technology. However, successful use of these technologies requires a lot of additional machinery whose use is not straightforward for non-experts: different parallel frameworks (MPI, Storm, multiprocessing, etc.) must be used depending on the computing resources (local machines, grids, clouds, clusters) where applications are run. This implies that to achieve the best application performance, users usually have to change their code depending on the features of the platform selected for running it. This work presents dispel4py, a new open-source Python library for describing abstract stream-based workflows for distributed data-intensive applications. Special care has been taken to provide dispel4py with the ability to map abstract workflows to different platforms dynamically at run-time. Currently dispel4py has four mappings: Apache Storm, MPI, multi-threading and sequential. The main goal of dispel4py is to provide an easy-to-use tool to develop and test workflows on local resources by using the sequential mode with a small dataset. Later, once a workflow is ready for long runs, it can be automatically executed on different parallel resources. dispel4py takes care of the underlying mappings by performing an efficient parallelisation. Processing Elements (PEs) represent the basic computational activities of any dispel4py workflow, such as a seismologic algorithm or a data transformation process. To create a dispel4py workflow, users only have to write a few lines of Python code to describe their PEs and how they are connected; Python is widely supported on many platforms and is popular in many scientific domains, such as the geosciences. Once a dispel4py workflow is written, a user only has to select which mapping they would like to use, and everything else (parallelisation, distribution of data) is carried out by dispel4py without any cost to the user. Among all dispel4py features we would like to highlight the following: the PEs are connected by streams rather than by writing to and reading from intermediate files, avoiding many IO operations; the PEs can be stored in a registry, so that different users can recombine PEs in many different workflows; and dispel4py has been enriched with a provenance mechanism to support runtime provenance analysis. We have adopted the W3C-PROV data model, which is accessible via a prototype browser-based user interface and a web API. It supports users with the visualisation of graphical products and offers combined operations to access and download the data, which may be selectively stored at runtime into dedicated data archives. dispel4py has already been used by seismologists in the VERCE project to develop different seismic workflows. One of them is the Seismic Ambient Noise Cross-Correlation workflow, which preprocesses and cross-correlates traces from several stations. First, this workflow was tested on a local machine by using a small number of stations as input data.
Later, it was executed on different parallel platforms (the SuperMUC cluster and the Terracorrelator machine), automatically scaling up by using the MPI and multiprocessing mappings and up to 1000 stations as input data. The results show that dispel4py achieves scalable performance with both mappings on the different parallel platforms tested.
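The stream-based composition pattern described above, where processing elements consume and emit items and a graph wires them together, can be imitated in a few self-contained lines. This is deliberately not the dispel4py API (its class and method names are not reproduced here); it only illustrates the PE idea with a toy detrend-and-correlate chain.

```python
# Self-contained imitation of the stream-based PE composition pattern described
# above (NOT the dispel4py API): PEs consume items from a stream, emit items to
# the next stream, and a simple sequential "mapping" drives the chain.
class PE:
    def process(self, item):            # yield zero or more output items
        raise NotImplementedError

class ReadTrace(PE):
    def __init__(self, traces):
        self.traces = traces
    def process(self, _):
        for trace in self.traces:
            yield trace

class Detrend(PE):
    def process(self, trace):
        mean = sum(trace) / len(trace)
        yield [x - mean for x in trace]

class CrossCorrelate(PE):
    def __init__(self):
        self.previous = None
    def process(self, trace):
        if self.previous is not None:
            yield sum(a * b for a, b in zip(self.previous, trace))
        self.previous = trace

def run_pipeline(pes, seed=None):
    """Sequential mapping: stream items through the chain of PEs."""
    items = [seed]
    for pe in pes:
        items = [out for item in items for out in pe.process(item)]
    return items

traces = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0], [0.0, 1.0, 0.0]]
print(run_pipeline([ReadTrace(traces), Detrend(), CrossCorrelate()]))
```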
Methylation Integration (Mint) | Informatics Technology for Cancer Research (ITCR)
A comprehensive software pipeline and set of Galaxy tools/workflows for integrative analysis of genome-wide DNA methylation and hydroxymethylation data. Input data can come from bisulfite sequencing and/or pull-down methods.
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Ehrler, Frederic; Ducloux, Pascal; Wu, Danny T Y; Lovis, Christian; Blondon, Katherine
2018-01-01
Supporting caregivers' workflow with mobile applications (apps) is a growing trend. At the bedside, apps can provide new ways to support the documentation process rather than using a desktop computer in a nursing office. Although these applications show potential, few existing reports have studied the real impact of such solutions. At the University Hospitals of Geneva, we developed BEDside Mobility, a mobile application supporting nurses' daily workflow. In a pilot study, the app was trialed in two wards for a period of one month. We collected data on the actual usage of the app and asked the users to complete a tailored technology acceptance model questionnaire at the end of the study period. Results show that participation remained stable over time, with participants using the tool for almost 29 minutes per day on average. The technology acceptance questionnaires revealed high usability of the app and good promotion from the institution, although users did not perceive any increase in productivity. Overall, intent of use diverged between promoters and antagonists. Furthermore, some participants considered the tool an addition to their workload. This evaluation underlines the importance of helping all end users perceive the benefits of a new intervention, since coworkers strongly influence each other.
Increasingly mobile: How new technologies can enhance qualitative research
Moylan, Carrie Ann; Derr, Amelia Seraphia; Lindhorst, Taryn
2015-01-01
Advances in technology, such as the growth of smart phones, tablet computing, and improved access to the internet have resulted in many new tools and applications designed to increase efficiency and improve workflow. Some of these tools will assist scholars using qualitative methods with their research processes. We describe emerging technologies for use in data collection, analysis, and dissemination that each offer enhancements to existing research processes. Suggestions for keeping pace with the ever-evolving technological landscape are also offered. PMID:25798072
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.
2016-01-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contributes to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), that enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the following iterations. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website. PMID:26419769
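The core attenuation step that reSpect performs on chimeric spectra can be sketched as follows: fragment peaks matched to an identified peptide are scaled down in proportion to the identification's confidence before the spectrum is searched again. This is an illustrative reconstruction of the idea, not the TPP implementation; the m/z tolerance and the scaling scheme are assumptions.

```python
# Sketch of confidence-weighted attenuation of matched fragment peaks, in the
# spirit of reSpect (illustrative only; not the Trans-Proteomic Pipeline code).

def attenuate(spectrum, matched_mz, probability, tol=0.02):
    """Scale down peaks within `tol` m/z of any matched fragment ion.

    spectrum    : list of (mz, intensity) pairs
    matched_mz  : fragment m/z values explained by the identified peptide
    probability : identification confidence in [0, 1]; higher = stronger attenuation
    """
    out = []
    for mz, intensity in spectrum:
        if any(abs(mz - m) <= tol for m in matched_mz):
            intensity *= (1.0 - probability)   # confident IDs remove most of the peak
        out.append((mz, intensity))
    return out

spectrum = [(175.12, 900.0), (262.15, 400.0), (333.19, 1200.0), (498.25, 150.0)]
first_id_fragments = [175.12, 333.19]          # explained by the first peptide found
residual = attenuate(spectrum, first_id_fragments, probability=0.95)
print(residual)   # the residual spectrum is then submitted to another search round
```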
Li, Jian; Batcha, Aarif Mohamed Nazeer; Grüning, Björn; Mansmann, Ulrich R.
2015-01-01
Next-generation sequencing (NGS) technologies that have advanced rapidly in the past few years possess the potential to classify diseases, decipher the molecular code of related cell processes, identify targets for decision-making on targeted therapy or prevention strategies, and predict clinical treatment response. Thus, NGS is on its way to revolutionize oncology. With the help of NGS, we can draw a finer map of the genetic basis of diseases and can improve our understanding of diagnostic and prognostic applications and therapeutic methods. Despite these advantages and its potential, NGS is facing several critical challenges, including reduction of sequencing cost, enhancement of sequencing quality, improvement of technical simplicity and reliability, and development of semiautomated and integrated analysis workflow. In order to address these challenges, we conducted a literature review and summarized a four-stage NGS workflow to provide a systematic review of NGS-based analysis, explaining the strengths and weaknesses of diverse NGS-based software tools, and elucidating its potential connection to individualized medicine. By presenting this four-stage NGS workflow, we try to provide a minimal structural layout required for NGS data storage and reproducibility. PMID:27081306
Managing the CMS Data and Monte Carlo Processing during LHC Run 2
NASA Astrophysics Data System (ADS)
Wissing, C.
2017-10-01
In order to cope with the challenges expected during LHC Run 2, CMS put a number of enhancements into the main software packages and the tools used for centrally managed processing. In the presentation we will highlight these improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operational flexibility and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data-federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor Pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have also been integrated into that Pool, which provides access to more than 200k CPU cores.
Managing bioengineering complexity with AI techniques.
Beal, Jacob; Adler, Aaron; Yaman, Fusun
2016-10-01
Our capabilities for systematic design and engineering of biological systems are rapidly increasing. Effectively engineering such systems, however, requires the synthesis of a rapidly expanding and changing complex body of knowledge, protocols, and methodologies. Many of the problems in managing this complexity, however, appear susceptible to being addressed by artificial intelligence (AI) techniques, i.e., methods enabling computers to represent, acquire, and employ knowledge. Such methods can be employed to automate physical and informational "routine" work and thus better allow humans to focus their attention on the deeper scientific and engineering issues. This paper examines the potential impact of AI on the engineering of biological organisms through the lens of a typical organism engineering workflow. We identify a number of key opportunities for significant impact, as well as challenges that must be overcome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Computational Tools for Metabolic Engineering
Copeland, Wilbert B.; Bartley, Bryan A.; Chandran, Deepak; Galdzicki, Michal; Kim, Kyung H.; Sleight, Sean C.; Maranas, Costas D.; Sauro, Herbert M.
2012-01-01
A great variety of software applications are now employed in the metabolic engineering field. These applications have been created to support a wide range of experimental and analysis techniques. Computational tools are utilized throughout the metabolic engineering workflow to extract and interpret relevant information from large data sets, to present complex models in a more manageable form, and to propose efficient network design strategies. In this review, we present a number of tools that can assist in modifying and understanding cellular metabolic networks. The review covers seven areas of relevance to metabolic engineers. These include metabolic reconstruction efforts, network visualization, nucleic acid and protein engineering, metabolic flux analysis, pathway prospecting, post-structural network analysis and culture optimization. The list of available tools is extensive and we can only highlight a small, representative portion of the tools from each area. PMID:22629572
Review of Use and Integration of Mobile Apps Into Psychiatric Treatments.
Chan, Steven; Godwin, Haley; Gonzalez, Alvaro; Yellowlees, Peter M; Hilty, Donald M
2017-10-30
Mental health practitioners should understand the features of current, publicly available apps; the features of novel, research apps; and issues behind the integration of mobile apps and digital health services into clinical workflows. The review is based on the research literature and the authors' clinical and healthcare administration experience. Articles searched (on telepsychiatry, telemental health, mobile mental health, informatics, cellular phone, ambulatory monitoring, telemetry, and algorithms) were restricted to 2016 and 2017. Technologies are used in a variety of clinical settings, including patients with varying mental illness severity, social supports, and technological literacy. Good practices for evaluating apps, understanding user needs, and training and educating users can increase success rates. Ethics and risk management should be considered. Mobile apps are versatile. Integrating apps into psychiatric treatment requires addressing both patient and clinical workflows, design and usability principles, accessibility, social concerns, and digital health literacy.
ERIC Educational Resources Information Center
Muthukumar; Hedberg, John G.
2005-01-01
There is growing recognition that the economic climate of the world is shifting towards a knowledge-based economy where knowledge will be cherished as the most prized asset. In this regard, technology can be leveraged as a useful tool in effectually managing the knowledge capital of an organisation. Although several research studies have advanced…
ERIC Educational Resources Information Center
Winslow, Joe; Dickerson, Jeremy; Lee, Cheng-Yuan; Geer, Gregory
2012-01-01
This paper reports findings from an evaluation of a district-wide initiative deploying iPads to school administrators (principals) to improve workflow efficiencies and promote technology leadership self-efficacy. The findings indicate that iPad utilization not only facilitated administrative tasks (memos, calendars, etc.), but also improved…
Kolowitz, Brian J; Lauro, Gonzalo Romero; Venturella, James; Georgiev, Veliyan; Barone, Michael; Deible, Christopher; Shrestha, Rasu
2014-04-01
The adoption of social media technologies appears to enhance clinical outcomes through improved communications as reported by Bacigalupe (Fam Syst Heal 29(1):1-14, 2011). The ability of providers to more effectively, directly, and rapidly communicate among themselves as well as with patients should strengthen collaboration and treatment as reported by Bacigalupe (Fam Syst Heal 29(1):1-14, 2011). This paper is a case study in one organization's development of an internally designed and developed social technology solution termed "Unite." The Unite system combines social technologies' features including push notifications, messaging, community groups, and user lists with clinical workflow and applications to construct dynamic provider networks, simplify communications, and facilitate clinical workflow optimization. Modeling Unite as a social technology may ease adoption barriers. Developing a social network that is integrated with healthcare information systems in the clinical space opens the doors to capturing and studying the way in which providers communicate. The Unite system appears to have the potential to break down existing communication paradigms. With Unite, a rich set of usage data tied to clinical events may unravel alternative networks that can be leveraged to advance patient care.
Bialecki, Brian; Park, James; Tilkin, Mike
2016-08-01
The intent of this project was to use object storage, whose database has the ability to attach custom extensible metadata to each imaging object stored within the system, to harness the power of its search capabilities and to close the technology gap that healthcare faces. This creates a non-disruptive tool that can be used natively by both legacy systems and the healthcare systems of today which leverage more advanced storage technologies. The base infrastructure can be populated alongside current workflows without any interruption to the delivery of services. In certain use cases, this technology can be seen as a true alternative to the VNA (Vendor Neutral Archive) systems implemented by healthcare today. The scalability, security, and ability to process complex objects make this more than just storage for image data and a commodity to be consumed by PACS (Picture Archiving and Communication System) and workstations. Object storage is a smart technology that can be leveraged to create vendor independence, standards compliance, and a data repository that can be mined for truly relevant content by adding additional context to search capabilities. This functionality can lead to efficiencies in workflow and a wealth of minable data to improve outcomes into the future.
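The pattern described above, attaching extensible metadata to each stored imaging object and then searching on it, can be illustrated with a small, self-contained sketch. SQLite stands in for the object store's metadata database, and the DICOM-like tag names, keys, and schema are illustrative assumptions rather than the authors' architecture.

```python
# Minimal sketch of the "object + extensible metadata" pattern: each stored
# imaging object gets arbitrary key/value metadata, which is indexed and
# searchable without going through PACS. Schema and tag names are illustrative.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE objects (object_key TEXT PRIMARY KEY, metadata TEXT)")

def put_object(key, payload, metadata):
    # In a real deployment the payload would go to the object store itself
    # (e.g. via an S3-compatible API); here we only persist searchable metadata.
    db.execute("INSERT INTO objects VALUES (?, ?)", (key, json.dumps(metadata)))

def search(**criteria):
    rows = db.execute("SELECT object_key, metadata FROM objects").fetchall()
    return [key for key, meta in rows
            if all(json.loads(meta).get(k) == v for k, v in criteria.items())]

put_object("study-001/ct-chest.dcm", b"...",
           {"Modality": "CT", "BodyPart": "CHEST", "StudyDate": "20160801"})
put_object("study-002/mr-brain.dcm", b"...",
           {"Modality": "MR", "BodyPart": "BRAIN", "StudyDate": "20160802"})

print(search(Modality="CT"))   # -> ['study-001/ct-chest.dcm']
```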
Mehdipoor, Hamed; Zurita-Milla, Raul; Rosemartin, Alyssa; Gerst, Katharine L.; Weltzin, Jake F.
2015-01-01
Recent improvements in online information communication and mobile location-aware technologies have led to the production of large volumes of volunteered geographic information. Widespread, large-scale efforts by volunteers to collect data can inform and drive scientific advances in diverse fields, including ecology and climatology. Traditional workflows to check the quality of such volunteered information can be costly and time consuming as they heavily rely on human interventions. However, identifying factors that can influence data quality, such as inconsistency, is crucial when these data are used in modeling and decision-making frameworks. Recently developed workflows use simple statistical approaches that assume that the majority of the information is consistent. However, this assumption is not generalizable, and ignores underlying geographic and environmental contextual variability that may explain apparent inconsistencies. Here we describe an automated workflow to check inconsistency based on the availability of contextual environmental information for sampling locations. The workflow consists of three steps: (1) dimensionality reduction to facilitate further analysis and interpretation of results, (2) model-based clustering to group observations according to their contextual conditions, and (3) identification of inconsistent observations within each cluster. The workflow was applied to volunteered observations of flowering in common and cloned lilac plants (Syringa vulgaris and Syringa x chinensis) in the United States for the period 1980 to 2013. About 97% of the observations for both common and cloned lilacs were flagged as consistent, indicating that volunteers provided reliable information for this case study. Relative to the original dataset, the exclusion of inconsistent observations changed the apparent rate of change in lilac bloom dates by two days per decade, indicating the importance of inconsistency checking as a key step in data quality assessment for volunteered geographic information. Initiatives that leverage volunteered geographic information can adapt this workflow to improve the quality of their datasets and the robustness of their scientific analyses.
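The three-step inconsistency check described above maps onto standard scikit-learn components: dimensionality reduction of the contextual environmental variables, model-based clustering, and per-cluster flagging of unlikely observations. The variables, cluster count, and flagging threshold below are placeholders, not the ones used in the study.

```python
# Sketch of the three-step consistency workflow: (1) PCA on contextual
# environmental variables, (2) model-based (Gaussian mixture) clustering,
# (3) flag observations that are unlikely within their own cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Columns stand in for e.g. spring temperature, growing degree days, latitude,
# elevation, and observed day-of-year of first bloom (synthetic data).
X = rng.normal(size=(2000, 5))

# (1) dimensionality reduction
X_red = PCA(n_components=3).fit_transform(X)

# (2) model-based clustering of contextual conditions
gmm = GaussianMixture(n_components=4, random_state=0).fit(X_red)
labels = gmm.predict(X_red)

# (3) within each cluster, flag observations with very low likelihood
log_density = gmm.score_samples(X_red)
flags = np.zeros(len(X_red), dtype=bool)
for k in range(gmm.n_components):
    in_k = labels == k
    cutoff = np.percentile(log_density[in_k], 3)   # bottom ~3% of each cluster
    flags[in_k] = log_density[in_k] < cutoff

print(f"{(~flags).mean():.1%} of observations flagged as consistent")
```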
Evaluating the benefits of digital pathology implementation: Time savings in laboratory logistics.
Baidoshvili, Alexi; Bucur, Anca; van Leeuwen, Jasper; van der Laak, Jeroen; Kluin, Philip; van Diest, Paul J
2018-06-20
The benefits of digital pathology for workflow improvement and thereby cost savings in pathology, at least partly outweighing investment costs, are increasingly recognized. Successful implementations in a variety of scenarios start to demonstrate cost benefits of digital pathology for both research and routine diagnostics, contributing to a sound business case encouraging further adoption. To further support new adopters, there is still a need for detailed assessment of the impact this technology has on the relevant pathology workflows with emphasis on time saving. To assess the impact of digital pathology adoption on logistic laboratory tasks (i.e. not including pathologists' time for diagnosis making) in LabPON, a large regional pathology laboratory in The Netherlands. To quantify the benefits of digitization we analyzed the differences between the traditional analog and new digital workflows, carried out detailed measurements of all relevant steps in key analog and digital processes, and compared time spent. We modeled and assessed the logistic savings in five workflows: (1) Routine diagnosis, (2) Multi-disciplinary meeting, (3) External revision requests, (4) Extra stainings and (5) External consultation. On average over 19 working hours were saved on a typical day by working digitally, with the highest savings in routine diagnosis and multi-disciplinary meeting workflows. By working digitally, a significant amount of time could be saved in a large regional pathology lab with a typical case mix. We also present the data in each workflow per task and concrete logistic steps to allow extrapolation to the context and case mix of other laboratories. This article is protected by copyright. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheu, R; Ghafar, R; Powers, A
Purpose: Demonstrate the effectiveness of in-house software in ensuring EMR workflow efficiency and safety. Methods: A web-based dashboard system (WBDS) was developed to monitor clinical workflow in real time using web technology (WAMP) through ODBC (Open Database Connectivity). Within Mosaiq (Elekta Inc), operational workflow is driven and indicated by Quality Check Lists (QCLs), which are triggered by the automation software IQ Scripts (Elekta Inc); QCLs rely on user completion to propagate. The WBDS retrieves data directly from the Mosaiq SQL database and tracks clinical events in real time. For example, the necessity of a physics initial chart check can be determined by screening all patients on treatment who have received their first fraction and who have not yet had their first chart check. Monitoring similar “real” events with our in-house software creates a safety net, as its propagation does not rely on individual user input. Results: The WBDS monitors the following: patient care workflow (initial consult to end of treatment), daily treatment consistency (scheduling, technique, charges), physics chart checks (initial, EOT, weekly), new starts, missing treatments (>3 warning/>5 fractions, action required), and machine overrides. The WBDS can be launched from any web browser, which allows the end user complete transparency and timely information. Since the creation of the dashboards, workflow interruptions due to accidental deletion or completion of QCLs have been eliminated. Additionally, all physics chart checks were completed on time. Prompt notifications of treatment record inconsistencies and machine overrides have decreased the time between occurrence and execution of corrective action. Conclusion: Our clinical workflow relies primarily on QCLs and IQ Scripts; however, this functionality is not a panacea for safety and efficiency. The WBDS creates a more thorough system of checks to provide a safer and nearly error-free working environment.
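The kind of real-time screening the dashboard performs, for example finding patients who have received a first fraction but have no initial physics chart check, reduces to a SQL query against the record-and-verify database over ODBC. The connection string, table names, and column names below are hypothetical stand-ins; the actual Mosaiq schema is vendor-specific and a reachable database server is assumed.

```python
# Illustrative ODBC query for one dashboard check: patients with a delivered
# first fraction but no initial physics chart check recorded. Table and column
# names are hypothetical, not the actual Mosaiq schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mosaiq-db;DATABASE=MOSAIQ;"
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT p.patient_id, p.last_name, MIN(t.treatment_datetime) AS first_fraction
FROM treatments AS t
JOIN patients  AS p ON p.patient_id = t.patient_id
LEFT JOIN chart_checks AS c
       ON c.patient_id = p.patient_id AND c.check_type = 'PHYSICS_INITIAL'
WHERE t.status = 'DELIVERED' AND c.patient_id IS NULL
GROUP BY p.patient_id, p.last_name
"""

for row in conn.cursor().execute(QUERY):
    # Rows feed the web dashboard rather than relying on user-completed QCLs.
    print(row.patient_id, row.last_name, row.first_fraction)
```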
Spjuth, Ola; Karlsson, Andreas; Clements, Mark; Humphreys, Keith; Ivansson, Emma; Dowling, Jim; Eklund, Martin; Jauhiainen, Alexandra; Czene, Kamila; Grönberg, Henrik; Sparén, Pär; Wiklund, Fredrik; Cheddad, Abbas; Pálsdóttir, Þorgerður; Rantalainen, Mattias; Abrahamsson, Linda; Laure, Erwin; Litton, Jan-Eric; Palmgren, Juni
2017-09-01
We provide an e-Science perspective on the workflow from risk factor discovery and classification of disease to evaluation of personalized intervention programs. As case studies, we use personalized prostate and breast cancer screenings. We describe an e-Science initiative in Sweden, e-Science for Cancer Prevention and Control (eCPC), which supports biomarker discovery and offers decision support for personalized intervention strategies. The generic eCPC contribution is a workflow with 4 nodes applied iteratively, and the concept of e-Science signifies systematic use of tools from the mathematical, statistical, data, and computer sciences. The eCPC workflow is illustrated through 2 case studies. For prostate cancer, an in-house personalized screening tool, the Stockholm-3 model (S3M), is presented as an alternative to prostate-specific antigen testing alone. S3M is evaluated in a trial setting and plans for rollout in the population are discussed. For breast cancer, new biomarkers based on breast density and molecular profiles are developed and the US multicenter Women Informed to Screen Depending on Measures (WISDOM) trial is referred to for evaluation. While current eCPC data management uses a traditional data warehouse model, we discuss eCPC-developed features of a coherent data integration platform. E-Science tools are a key part of an evidence-based process for personalized medicine. This paper provides a structured workflow from data and models to evaluation of new personalized intervention strategies. The importance of multidisciplinary collaboration is emphasized. Importantly, the generic concepts of the suggested eCPC workflow are transferrable to other disease domains, although each disease will require tailored solutions. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Nurse-Technology Interactions and Patient Safety.
Ruppel, Halley; Funk, Marjorie
2018-06-01
Nurses are the end-users of most technology in intensive care units, and the ways in which they interact with technology affect quality of care and patient safety. Nurses' interactions include the processes of ensuring proper input of data into the technology as well as extracting and interpreting the output (clinical data, technical data, alarms). Current challenges in nurse-technology interactions for physiologic monitoring include issues regarding alarm management, workflow interruptions, and monitor surveillance. Patient safety concepts, like high reliability organizations and human factors, can advance efforts to enhance nurse-technology interactions. Copyright © 2018 Elsevier Inc. All rights reserved.
Specification and design of a Therapy Imaging and Model Management System (TIMMS)
NASA Astrophysics Data System (ADS)
Lemke, Heinz U.; Berliner, Leonard
2007-03-01
Appropriate use of Information and Communication Technology (ICT) and Mechatronic (MT) systems is considered by many experts as a significant contribution to improve workflow and quality of care in the Operating Room (OR). This will require a suitable IT infrastructure as well as communication and interface standards, such as DICOM and suitable extensions, to allow data interchange between surgical system components in the OR. A conceptual design of such an infrastructure, i.e. a Therapy Imaging and Model Management System (TIMMS) will be introduced in this paper. A TIMMS should support the essential functions that enable and advance image, and in particular, patient model guided therapy. Within this concept, the image centric world view of the classical PACS technology is complemented by an IT model-centric world view. Such a view is founded in the special modelling needs of an increasing number of modern surgical interventions as compared to the imaging intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. A proper design of a TIMMS, taking into account modern software engineering principles, such as service oriented architecture, will clarify the right position of interfaces and relevant standards for a Surgical Assist System (SAS) in general and their components specifically. Such a system needs to be designed to provide a highly modular structure. Modules may be defined on different granulation levels. A first list of components (e.g. high and low level modules) comprising engines and repositories of an SAS, which should be integrated by a TIMMS, will be introduced in this paper.
Knowledge management in the engineering design environment
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2006-01-01
The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.
Davis, Stephen Jerome; Hurtado, Josephine; Nguyen, Rosemary; Huynh, Tran; Lindon, Ivan; Hudnall, Cedric; Bork, Sara
2017-01-01
Background: USP <797> regulatory requirements have mandated that pharmacies improve aseptic techniques and cleanliness of the medication preparation areas. In addition, the Institute for Safe Medication Practices (ISMP) recommends that technology and automation be used as much as possible for preparing and verifying compounded sterile products. Objective: To determine the benefits associated with the implementation of the workflow management system, such as reducing medication preparation and delivery errors, reducing quantity and frequency of medication errors, avoiding costs, and enhancing the organization's decision to move toward positive patient identification (PPID). Methods: At Texas Children's Hospital, data were collected and analyzed from January 2014 through August 2014 in the pharmacy areas in which the workflow management system would be implemented. Data were excluded for September 2014 during the workflow management system oral liquid implementation phase. Data were collected and analyzed from October 2014 through June 2015 to determine whether the implementation of the workflow management system reduced the quantity and frequency of reported medication errors. Data collected and analyzed during the study period included the quantity of doses prepared, number of incorrect medication scans, number of doses discontinued from the workflow management system queue, and the number of doses rejected. Data were collected and analyzed to identify patterns of incorrect medication scans, to determine reasons for rejected medication doses, and to determine the reduction in wasted medications. Results: During the 17-month study period, the pharmacy department dispensed 1,506,220 oral liquid and injectable medication doses. From October 2014 through June 2015, the pharmacy department dispensed 826,220 medication doses that were prepared and checked via the workflow management system. Of those 826,220 medication doses, there were 16 reported incorrect volume errors. The error rate after the implementation of the workflow management system averaged 8.4%, which was a 1.6% reduction. After the implementation of the workflow management system, the average number of reported oral liquid medication and injectable medication errors decreased to 0.4 and 0.2 times per week, respectively. Conclusion: The organization was able to achieve its purpose and goal of improving the provision of quality pharmacy care through optimal medication use and safety by reducing medication preparation errors. Error rates decreased and the workflow processes were streamlined, which has led to seamless operations within the pharmacy department. There has been significant cost avoidance and waste reduction and enhanced interdepartmental satisfaction due to the reduction of reported medication errors.
Computational Challenges in the Analysis of Petrophysics Using Microtomography and Upscaling
NASA Astrophysics Data System (ADS)
Liu, J.; Pereira, G.; Freij-Ayoub, R.; Regenauer-Lieb, K.
2014-12-01
Microtomography provides detailed 3D internal structures of rocks at micrometer to tens-of-nanometer resolution and is quickly turning into a new technology for studying the petrophysical properties of materials. An important step is the upscaling of these properties, since imaging at micron or sub-micron resolution can only be performed on samples of millimeter scale or smaller. We present here a recently developed computational workflow for the analysis of microstructures, including the upscaling of material properties. Computations of properties are first performed using conventional material science simulations at the micro to nano scale. The subsequent upscaling of these properties is done by a novel renormalization procedure based on percolation theory. We have tested the workflow using different rock samples as well as biological and food science materials. We have also applied the technique to high-resolution time-lapse synchrotron CT scans. In this contribution we focus on the computational challenges that arise from the big data problem of analyzing petrophysical properties and their subsequent upscaling. We discuss the following challenges: 1) Characterization of microtomography for extremely large data sets: our current capability. 2) Computational fluid dynamics simulations at the pore scale for permeability estimation: methods, computing cost and accuracy. 3) Solid mechanics computations at the pore scale for estimating elasto-plastic properties: computational stability, cost, and efficiency. 4) Extracting critical exponents from derivative models for scaling laws: models, finite element meshing, and accuracy. Significant progress in each of these challenges is necessary to transform microtomography from the current research problem into a robust computational big data tool for multi-scale scientific and engineering problems.
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic volume of bioinformatic resources (e.g., data and tools) available on the Internet poses a major challenge for biologists, who must manage and visualize them, and for bioinformaticians, who must rapidly create and execute in-silico experiments involving resources and activities spread across the Web. Any framework that aims to integrate such resources, as in a physical laboratory, must address, and ideally handle in a transparent and uniform way, physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, consequently, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework was designed and developed as a prototype with this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, reflect this effort. By integrating a semantic knowledge management system for distributed (bioinformatic) resources, a semantics-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution, UBioLab acts as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in declarative form), (ii) a powerful engine for defining and storing semantics-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment execution.
Simulation technology used for risky assessment in deep exploration project in China
NASA Astrophysics Data System (ADS)
jiao, J.; Huang, D.; Liu, J.
2013-12-01
Deep exploration has been carried out in China for five years, employing various heavy-duty instruments and equipment for gravity, magnetic, seismic and electromagnetic prospecting, as well as an ultra-deep drilling rig for obtaining deep samples. Deep exploration is a large and complex systems-engineering effort that crosses multiple disciplines and requires great investment. Advanced technical means are therefore needed for the verification, appraisal, and optimization of geophysical prospecting equipment development. To reduce the risk of application and exploration, efficient management concepts and skills must be strengthened to consolidate management measures and workflow for this ambitious project. Evidence, prediction, evaluation and related decision strategies must be considered together to meet practical scientific requirements, technical limits and extensions. Simulation is therefore proposed as a tool for carrying out dynamic tests on actual or imagined systems. In practice, simulation must be combined with the instruments and equipment to accomplish R&D tasks. In this paper, simulation techniques are introduced into the R&D process of heavy-duty equipment and high-end engineering project technology. Based on information recently provided by a drilling group, a digital model is constructed by combining geographical data, 3D visualization, database management, and virtual reality technologies. This drives an R&D strategy in which data processing, instrument application, expected results and uncertainty, and even the operational workflow and its environment are simulated systematically and simultaneously, in order to obtain an optimal outcome and an equipment-updating strategy. The simulation technology can adjust, verify, appraise and optimize the primary plan as the real world or process changes, providing new insight into how the equipment can meet the demands of application and construction, and facilitating direct perception and understanding of the installation, debugging and experimental processes of key equipment for deep exploration. In this way, the goals of project cost conservation and risk reduction can be reasonably approached. Risk assessment can be used to quantitatively evaluate the possible degree of impact. During the research and development stage, information from the installation, debugging and simulation demonstration of the experimental process of key instruments and equipment is used to evaluate the fatigue and safety of the devices. This requires fully understanding the controllable and uncontrollable risk factors during the process, and then adjusting and mitigating the unsafe factors in risk assessment and prediction. Combined with professional geoscience software to process and interpret the environment and obtain evaluation parameters, simulation modeling comes closer to the exploration target, which requires more detailed evaluation. Safety and risk assessment from both micro and macro perspectives can thus be achieved, reducing the risk of equipment development and avoiding unnecessary losses along the way.
Litho hotspots fixing using model based algorithm
NASA Astrophysics Data System (ADS)
Zhang, Meili; Yu, Shirui; Mao, Zhibiao; Shafee, Marwa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Hu, Xinyi; Wan, Qijian; Du, Chunshan
2017-04-01
As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer's workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.
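As a rough illustration of the edge-movement search described in this abstract, the Python sketch below tries candidate displacements for an offending edge, discards moves that would violate a minimum-spacing DRC constraint, and accepts a move only if a re-simulation no longer flags a hotspot. The function names, the toy "simulator", and all numeric values are invented for illustration and are not part of the authors' tool.

```python
# Hypothetical sketch of the edge-movement fix: try candidate displacements in
# order of increasing magnitude, skip any that break a minimum-spacing DRC rule,
# and accept only a position that the (here: toy) litho check no longer flags.
from typing import Callable, Optional

def fix_hotspot(edge_position: float,
                neighbor_position: float,
                min_spacing: float,
                is_hotspot: Callable[[float], bool],
                candidate_steps=(1.0, -1.0, 2.0, -2.0, 3.0, -3.0)) -> Optional[float]:
    """Return a new edge position that clears the hotspot, or None if unfixable."""
    for step in candidate_steps:                      # smallest movements first
        new_pos = edge_position + step
        if abs(new_pos - neighbor_position) < min_spacing:
            continue                                  # would create a DRC violation
        if not is_hotspot(new_pos):                   # stands in for litho re-simulation
            return new_pos                            # fix found, no new hotspot
    return None                                       # report as an unfixable hint

# Example: a toy "simulator" that flags positions too close to 10.0 as hotspots.
if __name__ == "__main__":
    toy_sim = lambda pos: abs(pos - 10.0) < 1.5
    print(fix_hotspot(edge_position=10.0, neighbor_position=14.0,
                      min_spacing=2.0, is_hotspot=toy_sim))
```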
Biological standards for the Knowledge-Based BioEconomy: What is at stake.
de Lorenzo, Víctor; Schmidt, Markus
2018-01-25
The contribution of life sciences to the Knowledge-Based Bioeconomy (KBBE) asks for the transition of contemporary, gene-based biotechnology from being a trial-and-error endeavour to becoming an authentic branch of engineering. One requisite to this end is the need for standards to measure and represent accurately biological functions, along with languages for data description and exchange. However, the inherent complexity of biological systems and the lack of quantitative tradition in the field have largely curbed this enterprise. Fortunately, the onset of systems and synthetic biology has emphasized the need for standards not only to manage omics data, but also to increase reproducibility and provide the means of engineering living systems in earnest. Some domains of biotechnology can be easily standardized (e.g. physical composition of DNA sequences, tools for genome editing, languages to encode workflows), while others might be standardized with some dedicated research (e.g. biological metrology, operative systems for bio-programming cells) and finally others will require a considerable effort, e.g. defining the rules that allow functional composition of biological activities. Despite difficulties, these are worthy attempts, as the history of technology shows that those who set/adopt standards gain a competitive advantage over those who do not. Copyright © 2017 Elsevier B.V. All rights reserved.
Yoo, Sooyoung; Hwang, Hee; Jheon, Sanghoon
2016-08-01
The different levels of health information technology (IT) adoption and its integration into hospital workflow can affect the extent to which the benefits of health IT are realized. We aimed to share our experiences and the journey to the successful adoption of health IT over 13 years at a tertiary university hospital in South Korea. An integrated system of comprehensive applications for direct care, support care, and smart care has been implemented with the latest IT and a rich user information platform, achieving a fully digitized hospital. A user experience design methodology, barcode and radio-frequency identification (RFID) technologies, smartphone and mobile technologies, and data analytics were integrated into the hospital workflow. Applications for a user-centered electronic medical record (EMR) and clinical decision support (CDS), closed-loop medication administration (CLMA), a mobile EMR and dashboard system for care coordination, a clinical data warehouse (CDW) system, and patient engagement solutions were designed and developed to improve quality of care, work efficiency, and patient safety. We believe that comprehensive electronic health record systems and patient-centered smart hospital applications will go a long way in ensuring seamless patient care and experience.
NASA Astrophysics Data System (ADS)
Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent
2010-03-01
When a patient suspected of stroke is admitted to the emergency room, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. Ensuring that an expert radiologist is available in the emergency environment to diagnose the stroke patient in a timely manner adds further challenges to the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision anywhere and anytime. We present a small pilot project evaluating the use of mobile devices such as iPhones in assessing stroke patients. The results of the evaluation as well as any challenges in setting up the system are also discussed.
How Memorial Hermann’s Online Payments Are Boosting Patient Loyalty and Revenue.
Ramos Hegwer, Laura
2016-01-01
The Houston-based health system has implemented new workflows and technology in 14 of its hospitals and across its care delivery network to make the payment process more patient-friendly and build consumer loyalty.
Larson, Paul S; Willie, Jon T; Vadivelu, Sudhakar; Azmi-Ghadimi, Hooman; Nichols, Amy; Fauerbach, Loretta Litz; Johnson, Helen Boehm; Graham, Denise
2017-07-01
The development of navigation technology facilitating MRI-guided stereotactic neurosurgery has enabled neurosurgeons to perform a variety of procedures ranging from deep brain stimulation to laser ablation entirely within an intraoperative or diagnostic MRI suite while having real-time visualization of brain anatomy. Prior to this technology, some of these procedures required multisite workflow patterns that presented significant risk to the patient during transport. For those facilities with access to this technology, safe practice guidelines exist only for procedures performed within an intraoperative MRI. There are currently no safe practice guidelines or parameters available for facilities looking to integrate this technology into practice in conventional MRI suites. Performing neurosurgical procedures in a diagnostic MRI suite does require precautionary measures. The relative novelty of technology and workflows for direct MRI-guided procedures requires consideration of safe practice recommendations, including those pertaining to infection control and magnet safety issues. This article proposes a framework of safe practice recommendations designed for assessing readiness and optimization of MRI-guided neurosurgical interventions in the diagnostic MRI suite in an effort to mitigate patient risk. The framework is based on existing clinical evidence, recommendations, and guidelines related to infection control and prevention, health care-associated infections, and magnet safety, as well as the clinical and practical experience of neurosurgeons utilizing this technology. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.
Low-Cost, High-Throughput Sequencing of DNA Assemblies Using a Highly Multiplexed Nextera Process.
Shapland, Elaine B; Holmes, Victor; Reeves, Christopher D; Sorokin, Elena; Durot, Maxime; Platt, Darren; Allen, Christopher; Dean, Jed; Serber, Zach; Newman, Jack; Chandran, Sunil
2015-07-17
In recent years, next-generation sequencing (NGS) technology has greatly reduced the cost of sequencing whole genomes, whereas the cost of sequence verification of plasmids via Sanger sequencing has remained high. Consequently, industrial-scale strain engineers either limit the number of designs or take short cuts in quality control. Here, we show that over 4000 plasmids can be completely sequenced in one Illumina MiSeq run for less than $3 each (15× coverage), which is a 20-fold reduction over using Sanger sequencing (2× coverage). We reduced the volume of the Nextera tagmentation reaction by 100-fold and developed an automated workflow to prepare thousands of samples for sequencing. We also developed software to track the samples and associated sequence data and to rapidly identify correctly assembled constructs having the fewest defects. As DNA synthesis and assembly become a centralized commodity, this NGS quality control (QC) process will be essential to groups operating high-throughput pipelines for DNA construction.
Design of low noise imaging system
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
To meet the needs of engineering applications for a low-noise imaging system operating in global-shutter mode, a complete imaging system is designed around the SCMOS (Scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on an analysis of the key specifications and technologies of the imaging system, the chips are selected and a SCMOS + FPGA + DDRII + Camera Link processing architecture is adopted. The entire system workflow and the design of the power supply and distribution unit are then introduced. The software system, which consists of the SCMOS control module, image acquisition module, data cache control module and transmission control module, is designed in the Verilog language and runs on a Xilinx FPGA. Imaging experiments show that the system achieves a resolution of 2560 × 2160 pixels and a maximum frame rate of 50 fps. The imaging quality of the system satisfies the design requirements.
NASA Astrophysics Data System (ADS)
Hess, M.; Robson, S.
2012-07-01
3D colour image data generated for the recording of small museum objects and archaeological finds are highly variable in quality and fitness for purpose. Whilst current technology is capable of extremely high quality outputs, there are currently no common standards or applicable guidelines in either the museum or engineering domain suited to scientific evaluation, understanding and tendering for 3D colour digital data. This paper firstly explains the rationale for and requirements of 3D digital documentation in museums. Secondly, it describes the design process, development and use of a new portable test object suited to sensor evaluation and the provision of user acceptance metrics. The test object is specifically designed for museums and heritage institutions and includes known surface and geometric properties which support quantitative and comparative imaging on different systems. The development of a supporting protocol will allow object reference data to be included in the data processing workflow with specific reference to conservation and curation.
2013-01-01
Background Information is lacking about the capacity of those working in community practice settings to utilize health information technology for colorectal cancer screening. Objective To address this gap we asked those working in community practice settings to share their perspectives about how the implementation of a Web-based patient-led decision aid might affect patient-clinician conversations about colorectal cancer screening and the day-to-day clinical workflow. Methods Five focus groups in five community practice settings were conducted with 8 physicians, 1 physician assistant, and 18 clinic staff. Focus groups were organized using a semistructured discussion guide designed to identify factors that mediate and impede the use of a Web-based decision aid intended to clarify patient preferences for colorectal cancer screening and to trigger shared decision making during the clinical encounter. Results All physicians, the physician assistant, and 8 of the 18 clinic staff were active participants in the focus groups. Clinician and staff participants from each setting reported a belief that the Web-based patient-led decision aid could be an informative and educational tool; in all but one setting participants reported a readiness to recommend the tool to patients. The exception related to clinicians from one clinic who described a preference for patients having fewer screening choices, noting that a colonoscopy was the preferred screening modality for patients in their clinic. Perceived barriers to utilizing the Web-based decision aid included patients’ lack of Internet access or low computer literacy, and potential impediments to the clinics’ daily workflow. Expanding patients’ use of an online decision aid that is both easy to access and understand and that is utilized by patients outside of the office visit was described as a potentially efficient means for soliciting patients’ screening preferences. Participants described that a system to link the online decision aid to a computerized reminder system could promote a better understanding of patients’ screening preferences, though some expressed concern that such a system could be difficult to keep up and running. Conclusions Community practice clinicians and staff perceived the Web-based decision aid technology as promising but raised questions as to how the technology and resultant information would be integrated into their daily practice workflow. Additional research investigating how to best implement online decision aids should be conducted prior to the widespread adoption of such technology so as to maximize the benefits of the technology while minimizing workflow disruptions. PMID:24351420
In-database processing of a large collection of remote sensing data: applications and implementation
NASA Astrophysics Data System (ADS)
Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina
2016-04-01
Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and has inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to the Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability, and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating files into a relational database as foreign data sources and performing analytical processing inside the database engine. Thereby a higher-level query language can efficiently address problems of arbitrary size: from accessing the data associated with a specific pixel or grid cell to complex aggregation over spatial or temporal extents across a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL, a widely used higher-level declarative query language, simplifies interoperability between desktop GIS, web applications, geographic web services and interactive scientific applications (MATLAB, IPython). The system also automatically ingests direct readout data from meteorological and research satellites in near-real time, with distributed acquisition workflows managed by the Taverna workflow engine [2]. The system has demonstrated its utility in performing non-trivial analytic processing such as the computation of Robust Satellite Technique (RST) indices [3]. It has been useful in tasks such as studying urban heat islands, analyzing patterns in the distribution of wildfire occurrences, and detecting phenomena related to seismic and earthquake activity. Initial experience has highlighted several limitations of the proposed approach, yet it has demonstrated the ability to facilitate the use of large archives of remote sensing data by geoscientists. 1. J.G. Acker, G. Leptoukh, Online analysis enhances use of NASA Earth science data. EOS Trans. AGU, 2007, 88(2), P. 14-17. 2. D. Hull, K. Wolstencroft, R. Stevens, C. Goble, M.R. Pocock, P. Li and T. Oinn, Taverna: a tool for building and running workflows of services. Nucleic Acids Research. 2006. V. 34. P. W729-W732. 3. V. Tramutoli, G. Di Bello, N. Pergola, S. Piscitelli, Robust satellite techniques for remote sensing of seismically active areas // Annals of Geophysics. 2001. no. 44(2). P. 295-312.
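The Python sketch below illustrates the in-database style of analysis described above: a single SQL aggregation, evaluated by the database engine next to the storage, replaces downloading and looping over product files. The table and column names, the PostGIS-style spatial predicate, and the connection string are assumptions for illustration and are not the system's actual schema.

```python
# Hypothetical sketch of pushing an aggregation into the database instead of
# downloading satellite product files. Schema (sst_measurements, acquired_at,
# region, value) is invented; the spatial filter assumes a PostGIS geometry column.
import psycopg2

QUERY = """
    SELECT date_trunc('month', acquired_at) AS month,
           avg(value)                        AS mean_value,
           count(*)                          AS n_pixels
    FROM   sst_measurements
    WHERE  region && ST_MakeEnvelope(82.0, 54.0, 86.0, 56.0, 4326)  -- spatial extent
      AND  acquired_at BETWEEN %s AND %s                            -- temporal extent
    GROUP  BY 1
    ORDER  BY 1;
"""

def monthly_means(dsn: str, start: str, end: str):
    """Run the aggregation in-database and return (month, mean, count) rows."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY, (start, end))
        return cur.fetchall()

# Usage (connection string and dates are placeholders):
# rows = monthly_means("dbname=rs_archive user=reader", "2010-01-01", "2015-12-31")
```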
Nanji, Karen C; Rothschild, Jeffrey M; Boehne, Jennifer J; Keohane, Carol A; Ash, Joan S; Poon, Eric G
2014-01-01
Electronic prescribing systems have often been promoted as a tool for reducing medication errors and adverse drug events. Recent evidence has revealed that adoption of electronic prescribing systems can lead to unintended consequences such as the introduction of new errors. The purpose of this study is to identify and characterize the unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in an outpatient pharmacy. A multidisciplinary team conducted direct observations of workflow in an independent pharmacy and semi-structured interviews with pharmacy staff members about their perceptions of the unrealized potential and residual consequences of electronic prescribing systems. We used qualitative methods to iteratively analyze text data using a grounded theory approach, and derive a list of major themes and subthemes related to the unrealized potential and residual consequences of electronic prescribing. We identified the following five themes: Communication, workflow disruption, cost, technology, and opportunity for new errors. These contained 26 unique subthemes representing different facets of our observations and the pharmacy staff's perceptions of the unrealized potential and residual consequences of electronic prescribing. We offer targeted solutions to improve electronic prescribing systems by addressing the unrealized potential and residual consequences that we identified. These recommendations may be applied not only to improve staff perceptions of electronic prescribing systems but also to improve the design and/or selection of these systems in order to optimize communication and workflow within pharmacies while minimizing both cost and the potential for the introduction of new errors.
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only in the actual numbers of reported protein groups but also in the actual composition of the groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but it does increase the number of peptides per protein and can thus generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today. Currently, there is a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly affects the final results of the research, the quantitation values, and the final claims in the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods has never been performed. The Journal of Proteomics has previously published multiple benchmark studies of bioinformatics algorithms in proteomics (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims at providing a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five different algorithms (ProteinProphet, MSBayesPro, ProteinLP, Fido and PIA) were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines (Mascot, X!Tandem, and MS-GF+) and combinations thereof were evaluated for every protein inference tool. In total, more than 186 protein lists were analyzed and carefully compared using three metrics for quality assessment of the protein inference results: (1) the number of reported proteins, (2) peptides per protein, and (3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms and parameters on each dataset.
The results show that (1) PIA or Fido seems to be a good choice when studying the results of the analyzed workflow, regarding not only the reported proteins and the high-quality identifications but also the required runtime; (2) merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group; (3) the use of databases containing not only the canonical forms but also known isoforms of proteins has a small impact on the number of reported proteins, and the detection of specific isoforms could, depending on the question behind the study, compensate for slightly shorter parsimonious reports; and (4) the current workflow can be easily extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
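As a minimal illustration of the quality-control metrics used in this comparison (number of reported proteins, peptides per protein, and uniquely reported proteins per inference method), the Python sketch below computes them from hypothetical per-method protein inference reports. The data structures and toy values are assumptions for illustration, not the paper's KNIME workflow.

```python
# Illustrative sketch: each inference method's report is assumed to be a mapping
# {protein_group: set_of_peptides}; compute the three QC metrics per method.
from statistics import mean

def qc_metrics(reports: dict[str, dict[str, set[str]]]) -> dict[str, dict[str, float]]:
    metrics = {}
    for method, groups in reports.items():
        # protein groups reported by any other method, for the uniqueness metric
        other_groups = set().union(*(set(r) for m, r in reports.items() if m != method))
        metrics[method] = {
            "reported_proteins": len(groups),                               # metric 1
            "peptides_per_protein": mean(len(p) for p in groups.values())
                                    if groups else 0.0,                     # metric 2
            "uniquely_reported": len(set(groups) - other_groups),           # metric 3
        }
    return metrics

# Toy example with two hypothetical inference methods.
if __name__ == "__main__":
    reports = {
        "PIA":  {"P1": {"pepA", "pepB"}, "P2": {"pepC"}},
        "Fido": {"P1": {"pepA"}, "P3": {"pepD", "pepE"}},
    }
    for method, m in qc_metrics(reports).items():
        print(method, m)
```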
Bar Coding and Tracking in Pathology.
Hanna, Matthew G; Pantanowitz, Liron
2016-03-01
Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.
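A minimal sketch of the specimen-tracking idea follows, assuming an invented label format: accession, block and slide identifiers are packed into a bar-code payload with a simple mod-10 check digit so that mislabeled or mis-scanned specimens can be caught on read-back. The format and checksum are illustrative only and do not correspond to a published symbology or any specific laboratory information system.

```python
# Hypothetical specimen-label payload such as might be encoded in a Code 128
# bar code. The "accession.block.Lslide-checkdigit" format is invented.
def check_digit(payload: str) -> str:
    """Mod-10 check digit over the character codes of the payload."""
    return str(sum(ord(c) for c in payload) % 10)

def make_label(accession: str, block: str, slide: int) -> str:
    payload = f"{accession}.{block}.L{slide:02d}"
    return f"{payload}-{check_digit(payload)}"

def parse_label(label: str) -> dict:
    payload, digit = label.rsplit("-", 1)
    if check_digit(payload) != digit:
        raise ValueError("check digit mismatch: possible mislabel or scan error")
    accession, block, slide = payload.split(".")
    return {"accession": accession, "block": block, "slide": int(slide[1:])}

# print(parse_label(make_label("S19-12345", "A1", 1)))
```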
Clinical Note Creation, Binning, and Artificial Intelligence
Deliberato, Rodrigo Octávio; Stone, David J
2017-01-01
The creation of medical notes in software applications poses an intrinsic problem in workflow as the technology inherently intervenes in the processes of collecting and assembling information, as well as the production of a data-driven note that meets both individual and healthcare system requirements. In addition, the note writing applications in currently available electronic health records (EHRs) do not function to support decision making to any substantial degree. We suggest that artificial intelligence (AI) could be utilized to facilitate the workflows of the data collection and assembly processes, as well as to support the development of personalized, yet data-driven assessments and plans. PMID:28778845
Bar Coding and Tracking in Pathology.
Hanna, Matthew G; Pantanowitz, Liron
2015-06-01
Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2015 Elsevier Inc. All rights reserved.
Future Software Sizing Metrics and Estimation Challenges
2011-07-01
Topics include ultrahigh software system assurance, legacy maintenance and Brownfield development, and Agile and Lean/Kanban development. Estimates are refined as the design of the maintenance modifications or Brownfield re-engineering is determined, and the difficulties of software maintenance estimation can often be mitigated by using lean workflow management techniques such as Kanban [25].
Barbarito, Fulvio; Pinciroli, Francesco; Mason, John; Marceglia, Sara; Mazzola, Luca; Bonacina, Stefano
2012-08-01
Information technologies (ITs) have now entered the everyday workflow of a variety of healthcare providers with a certain degree of independence. This independence may be the cause of difficulties in interoperability between information systems, and it can be overcome through the implementation and adoption of standards. Here we present the case of the Lombardy Region, in Italy, which has been able, over the last 10 years, to set up the Regional Social and Healthcare Information System, connecting all the healthcare providers within the region and providing full access to clinical and health-related documents independently of the healthcare organization that generated the document itself. This goal, in a region with almost 10 million citizens, was achieved through a twofold approach: first, the political and operative push towards the adoption of the Health Level 7 (HL7) standard within single hospitals and, second, the provision of a technological infrastructure for data sharing based on interoperability specifications recognized at the regional level for messages transmitted from healthcare providers to the central domain. The adoption of such regional interoperability specifications enabled communication among heterogeneous systems placed in different hospitals in Lombardy. Integrating the Healthcare Enterprise (IHE) integration profiles, which refer to HL7 standards, are adopted within hospitals for message exchange and for the definition of integration scenarios. The IHE patient administration management (PAM) profile with its different workflows is adopted for patient management, whereas the Scheduled Workflow (SWF), the Laboratory Testing Workflow (LTW), and the Ambulatory Testing Workflow (ATW) are adopted for order management. At present, the system manages 4,700,000 pharmacological e-prescriptions and 1,700,000 e-prescriptions for laboratory exams per month. It produces, monthly, 490,000 laboratory medical reports, 180,000 radiology medical reports, 180,000 first aid medical reports, and 58,000 discharge summaries. Hence, although work is still in progress, the Lombardy Region healthcare system is a fully interoperable social healthcare system connecting patients, healthcare providers, healthcare organizations, and healthcare professionals in a large and heterogeneous territory through the implementation of international health standards. Copyright © 2012 Elsevier Inc. All rights reserved.
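For readers unfamiliar with the message format underlying such an infrastructure, the sketch below builds a minimal HL7 v2 ADT (patient admission) message in Python. The facility names, identifiers and field usage are placeholders; the regional interoperability specifications and IHE profiles described above constrain far more than is shown here.

```python
# Minimal, illustrative HL7 v2 ADT^A01 message. All application/facility names
# and patient data are placeholders; real messages carry many more segments
# and fields as required by the applicable specifications.
from datetime import datetime

def build_adt_a01(patient_id: str, family: str, given: str, msg_id: str) -> str:
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|HOSP_APP|HOSP_A|REGIONAL_HUB|LOMBARDY|{ts}||ADT^A01|{msg_id}|P|2.5",
        f"EVN|A01|{ts}",
        f"PID|1||{patient_id}^^^HOSP_A||{family}^{given}",
        "PV1|1|I",
    ]
    return "\r".join(segments)  # HL7 v2 segments are separated by carriage returns

# print(build_adt_a01("0000123", "ROSSI", "MARIO", "MSG0001"))
```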
Experiencing Soil Science from your office through virtual experiences
NASA Astrophysics Data System (ADS)
Beato, M. Carmen; González-Merino, Ramón; Campillo, M. Carmen; Fernández-Ahumada, Elvira; Ortiz, Leovigilda; Taguas, Encarnación V.; Guerrero, José Emilio
2017-04-01
Currently, numerous tools based on the new information and communication technologies offer a wide range of possibilities for the implementation of interactive methodologies in Education and Science. In particular, virtual reality and immersive worlds (artificially generated computer environments where users interact through a figurative individual, their "avatar", that represents them in that environment) have been identified as a technology that will change the way we live, particularly in education, product development and entertainment (Schmorrow, 2009). Gisbert-Cervera et al. (2011) consider that 3D worlds in education, among other benefits, provide a unique environment for training and the exchange of knowledge which allows goal-oriented reflection to support activities and achieve learning outcomes. In Soil Science, the experimental component is essential to acquire the knowledge needed to understand the biogeochemical processes taking place and their interactions with time, climate, topography and the living organisms present. In this work, an immersive virtual environment which reproduces a series of soil pits has been developed to evaluate and differentiate soil characteristics such as texture, structure, consistency, color and other physical-chemical and biological properties for educational purposes. Bibliographical material such as pictures, books and papers was collected in order to classify the information needed and to build the soil profiles in the virtual environment. The development platform for the virtual recreation was Unreal Engine 4 (UE4; https://www.unrealengine.com/unreal-engine-4). This program was chosen because it provides two toolsets for programmers that can be used in tandem to accelerate development workflows. In addition, Unreal Engine 4 technology powers hundreds of games as well as real-time 3D films, training simulations and visualizations, and it creates very realistic graphics. For the evaluation of its impact and its usefulness in teaching, a series of surveys will be presented to undergraduate students and teachers. REFERENCES: Gisbert-Cervera M., Esteve-Gonzalez V., Camacho-Marti M.M. (2011). Delve into the Deep: Learning Potential in Metaverses and 3D Worlds. eLearning Papers (25). ISSN: 1887-1542. Schmorrow D.D. (2009). Why virtual? Theoretical Issues in Ergonomics Science 10(3): 279-282.
ERIC Educational Resources Information Center
Sarling, Jo
2005-01-01
This article gives details of the developments and changes in the Denver Public Library (DPL). Through a review of advancements in technology, vendor capabilities, staffing levels and talent, as well as outsourcing opportunities, DPL reinvented its workflow and processing. The result? The once giant stacks of books, CDs, videos, and DVDs waiting…
Learning the Language of Healthcare Enabling Semantic Web Technology in CHCS
2013-09-01
"Tuples" (subject, predicate, object) are used to relate data and achieve semantic interoperability. Other similar technologies exist, but their... Semantic Healthcare repository [5]. Ultimately, both of our data approaches were successful. However, our current test system is based on the CPRS demo... to extract system dependencies and workflows; to extract semantically related patient data; and to browse patient-centric views into the system.
Loeve, Arjo J; Al-Issawi, Jumana; Fernandez-Gutiérrez, Fabiola; Langø, Thomas; Strehlow, Jan; Haase, Sabrina; Matzko, Matthias; Napoli, Alessandro; Melzer, Andreas; Dankelman, Jenny
2016-04-01
Magnetic resonance guided focused ultrasound surgery (MRgFUS) has become an attractive, non-invasive treatment for benign and malignant tumours, and offers specific benefits for poorly accessible locations in the liver. However, the presence of the ribcage and the occurrence of liver motion due to respiration limit the applicability of MRgFUS. Several techniques are being developed to address these issues or to decrease treatment times in other ways. However, the potential benefit of such improvements has not been quantified. In this research, the detailed workflow of current MRgFUS procedures was determined qualitatively and quantitatively by using observation studies on uterine MRgFUS interventions, and the bottlenecks in MRgFUS were identified. A validated simulation model based on discrete events simulation was developed to quantitatively predict the effect of new technological developments on the intervention duration of MRgFUS on the liver. During the observation studies, the duration and occurrence frequencies of all actions and decisions in the MRgFUS workflow were registered, as were the occurrence frequencies of motion detections and intervention halts. The observation results show that current MRgFUS uterine interventions take on average 213 min. Organ motion was detected on average 2.9 times per intervention, of which on average 1.0 actually caused a need for rework. Nevertheless, these motion occurrences and the actions required to continue after their detection consumed on average 11% and up to 29% of the total intervention duration. The simulation results suggest that, depending on the motion occurrence frequency, the addition of new technology to automate currently manual MRgFUS tasks and motion compensation could potentially reduce the intervention durations by 98.4% (from 256h 5min to 4h 4min) in the case of 90% motion occurrence, and by 24% (from 5h 19min to 4h 2min) in the case of no motion. In conclusion, new tools were developed to predict how intervention durations will be affected by future workflow changes and by the introduction of new technology. Copyright © 2016 Elsevier Inc. All rights reserved.
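A toy Monte Carlo sketch of the kind of simulation model described above is shown below: each sonication cycle may be interrupted by detected organ motion, which costs either a long manual rework or a short automated compensation. All durations, probabilities and the automation speed-up are invented placeholders, not the parameters of the validated discrete events simulation model.

```python
# Illustrative Monte Carlo sketch of intervention duration with and without
# automated motion compensation. Every numeric value is a placeholder.
import random

def simulate_intervention(n_cycles=40, cycle_min=4.0, p_motion=0.05,
                          rework_min=20.0, automated=False, rng=None) -> float:
    """Return the total duration (minutes) of one simulated intervention."""
    rng = rng or random.Random()
    total = 30.0                                   # fixed setup/planning time (placeholder)
    for _ in range(n_cycles):
        total += cycle_min                         # one sonication cycle
        if rng.random() < p_motion:                # organ motion detected this cycle
            total += 2.0 if automated else rework_min   # automated compensation vs. manual rework
    return total

def mean_duration(runs=2000, **kwargs) -> float:
    rng = random.Random(42)                        # fixed seed for repeatability
    return sum(simulate_intervention(rng=rng, **kwargs) for _ in range(runs)) / runs

# print(mean_duration(automated=False), mean_duration(automated=True))
```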
National Positioning, Navigation, and Timing Architecture Study [Executive Summary]
DOT National Transportation Integrated Search
2008-09-01
The primary objective of this study is to develop a comprehensive workflow to guide the Oklahoma DOT in applying LIDAR technology to landslide monitoring and risk assessment on Oklahoma highways. In addition to the primary objective, this study also ai...
Computer Supported Cooperative Work in Information Search and Retrieval.
ERIC Educational Resources Information Center
Twidale, Michael B.; Nichols, David M.
1998-01-01
Considers how research in collaborative technologies can inform research and development in library and information science. Topics include computer supported collaborative work; shared drawing; collaborative writing; MUDs; MOOs; workflow; World Wide Web; collaborative learning; computer mediated communication; ethnography; evaluation; remote…
2011-02-01
Table of contents fragment: Process Architecture Technology Analysis: Executive; UIMA as Executive; A.4: Flow Code in UIMA.
Task Management in the New ATLAS Production System
NASA Astrophysics Data System (ADS)
De, K.; Golubkov, D.; Klimentov, A.; Potekhin, M.; Vaniachine, A.; Atlas Collaboration
2014-06-01
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
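The division of labour between DEFT and JEDI can be pictured with the schematic Python sketch below, in which a DEFT-like layer turns a request into dependent tasks and a JEDI-like layer expands each task into jobs sized by input chunks. The class, field and naming choices are illustrative assumptions and do not reflect the actual ATLAS implementation.

```python
# Schematic sketch (not ATLAS code) of meta-task -> tasks -> jobs decomposition.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    input_files: list[str]
    depends_on: list[str] = field(default_factory=list)

def define_meta_task(request: str) -> list[Task]:
    """DEFT role: translate a request into an ordered group of dependent tasks."""
    sim = Task("simulate", [f"{request}.evgen.{i:04d}" for i in range(8)])
    reco = Task("reconstruct", [f"simulate.out.{i:04d}" for i in range(8)],
                depends_on=["simulate"])
    return [sim, reco]

def expand_to_jobs(task: Task, files_per_job: int = 2) -> list[dict]:
    """JEDI role: dynamically split a task's inputs into executable workload jobs."""
    chunks = [task.input_files[i:i + files_per_job]
              for i in range(0, len(task.input_files), files_per_job)]
    return [{"task": task.name, "job_id": j, "inputs": chunk}
            for j, chunk in enumerate(chunks)]

# for task in define_meta_task("mc16_ttbar"):
#     print(task.name, len(expand_to_jobs(task)), "jobs")
```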
Design and implementation of an internet-based electrical engineering laboratory.
He, Zhenlei; Shen, Zhangbiao; Zhu, Shanan
2014-09-01
This paper describes an internet-based electrical engineering laboratory (IEE-Lab) with virtual and physical experiments at Zhejiang University. In order to combine the advantages of both experiment styles, the IEE-Lab adopts a Client/Server/Application framework and integrates virtual and physical experiments. The design and workflow of the IEE-Lab are introduced. An analog electronics experiment is taken as an example to show the Flex plug-in design, XML (Extensible Markup Language) based data communication, experiment simulation modeled in Modelica, and the design of the control terminals. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Genetic Design Automation: engineering fantasy or scientific renewal?
Lux, Matthew W.; Bramlett, Brian W.; Ball, David A.; Peccoud, Jean
2013-01-01
Synthetic biology aims to make genetic systems more amenable to engineering, which has naturally led to the development of Computer-Aided Design (CAD) tools. Experimentalists still primarily rely on project-specific ad-hoc workflows instead of domain-specific tools, suggesting that CAD tools are lagging behind the front line of the field. Here, we discuss the scientific hurdles that have limited the productivity gains anticipated from existing tools. We argue that the real value of efforts to develop CAD tools is the formalization of genetic design rules that determine the complex relationships between genotype and phenotype. PMID:22001068
Cornwell, MacIntosh; Vangala, Mahesh; Taing, Len; Herbert, Zachary; Köster, Johannes; Li, Bo; Sun, Hanfei; Li, Taiwen; Zhang, Jian; Qiu, Xintao; Pun, Matthew; Jeselsohn, Rinath; Brown, Myles; Liu, X Shirley; Long, Henry W
2018-04-12
RNA sequencing has become a ubiquitous technology used throughout life sciences as an effective method of measuring RNA abundance quantitatively in tissues and cells. The increase in use of RNA-seq technology has led to the continuous development of new tools for every step of analysis from alignment to downstream pathway analysis. However, effectively using these analysis tools in a scalable and reproducible way can be challenging, especially for non-experts. Using the workflow management system Snakemake we have developed a user friendly, fast, efficient, and comprehensive pipeline for RNA-seq analysis. VIPER (Visualization Pipeline for RNA-seq analysis) is an analysis workflow that combines some of the most popular tools to take RNA-seq analysis from raw sequencing data, through alignment and quality control, into downstream differential expression and pathway analysis. VIPER has been created in a modular fashion to allow for the rapid incorporation of new tools to expand the capabilities. This capacity has already been exploited to include very recently developed tools that explore immune infiltrate and T-cell CDR (Complementarity-Determining Regions) reconstruction abilities. The pipeline has been conveniently packaged such that minimal computational skills are required to download and install the dozens of software packages that VIPER uses. VIPER is a comprehensive solution that performs most standard RNA-seq analyses quickly and effectively with a built-in capacity for customization and expansion.
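The file-driven dependency logic that a workflow manager such as Snakemake applies can be pictured with the plain-Python sketch below (this is not Snakemake syntax and not VIPER itself): each step declares its inputs and outputs, and a step runs only when its inputs exist and its outputs are missing or older than the inputs. Rule names and commands are placeholders.

```python
# Schematic sketch of file-driven dependency resolution in a workflow manager.
import os

RULES = [
    {"name": "align",  "inputs": ["sample.fastq"], "outputs": ["sample.bam"],
     "cmd": "echo align > sample.bam"},
    {"name": "counts", "inputs": ["sample.bam"],   "outputs": ["counts.tsv"],
     "cmd": "echo counts > counts.tsv"},
]

def stale(rule) -> bool:
    """A rule is runnable if its inputs exist and its outputs are missing or outdated."""
    if not all(os.path.exists(i) for i in rule["inputs"]):
        return False                       # upstream not ready yet
    if not all(os.path.exists(o) for o in rule["outputs"]):
        return True                        # never produced
    newest_in = max(os.path.getmtime(i) for i in rule["inputs"])
    oldest_out = min(os.path.getmtime(o) for o in rule["outputs"])
    return newest_in > oldest_out          # inputs changed since last run

def run_pipeline():
    for rule in RULES:                     # rules listed in dependency order
        if stale(rule):
            print("running", rule["name"])
            os.system(rule["cmd"])

# run_pipeline()  # requires a sample.fastq file in the working directory
```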
CEREC CAD/CAM Chairside System
SANNINO, G.; GERMANO, F.; ARCURI, L.; BIGELLI, E.; ARCURI, C.; BARLATTANI, A.
2014-01-01
SUMMARY Purpose. The aim of this paper was to describe the CEREC 3 chairside system, providing clinicians with a detailed analysis of the whole digital workflow. Benefits and limitations of this technology compared with the conventional prosthetic workflow were also highlighted and discussed. Materials and methods. Clinical procedures (tooth preparation, impression taking, adhesive luting), operational components and their capabilities, as well as restorative materials used with the CEREC 3 chairside system, were reported. Results. The CEREC system has shown many positive aspects that make the prosthetic workflow easier, faster and less expensive. Operator-dependent errors are minimized compared with the conventional prosthetic protocol. Furthermore, a better acceptance level for the impression procedure was shown by the patients. The only drawback could be the subgingival placement of the margins compared with supra/juxta-gingival margins, since more time was required for the impression taking as well as the adhesive luting phase. The biocopy approach seemed to be the best tool to obtain functionalized surfaces and keep gnathological data unchanged. Material selection was related to the type of restoration. Conclusions. The evidence of our clinical practice suggests that the CEREC 3 chairside system makes it possible to produce highly aesthetic and reliable restorations in a single visit, while minimizing costs and patient discomfort during prosthetic treatment. However, improvements in materials and technologies are needed in order to overcome the current drawbacks. PMID:25992260
Autonomous Metabolomics for Rapid Metabolite Identification in Global Profiling
Benton, H. Paul; Ivanisevic, Julijana; Mahieu, Nathaniel G.; ...
2014-12-12
An autonomous metabolomic workflow combining mass spectrometry analysis with tandem mass spectrometry data acquisition was designed to allow for simultaneous data processing and metabolite characterization. Although tandem mass spectrometry data have previously been generated on the fly, the experiments described herein combine this technology with the bioinformatic resources of XCMS and METLIN. As a result of this unique integration, we can analyze large profiling datasets and simultaneously obtain structural identifications. Furthermore, validation of the workflow on bacterial samples allowed the profiling of on the order of a thousand metabolite features with simultaneous tandem mass spectra data acquisition. The tandem mass spectrometry data acquisition enabled automatic search and matching against the METLIN tandem mass spectrometry database, shortening the current workflow from days to hours. Overall, the autonomous approach to untargeted metabolomics provides an efficient means of metabolomic profiling, and will ultimately allow the more rapid integration of comparative analyses, metabolite identification, and data analysis at a systems biology level.
A computational workflow for designing silicon donor qubits
Humble, Travis S.; Ericson, M. Nance; Jakowski, Jacek; ...
2016-09-19
Developing devices that can reliably and accurately demonstrate the principles of superposition and entanglement is an on-going challenge for the quantum computing community. Modeling and simulation offer attractive means of testing early device designs and establishing expectations for operational performance. However, the complex integrated material systems required by quantum device designs are not captured by any single existing computational modeling method. We examine the development and analysis of a multi-staged computational workflow that can be used to design and characterize silicon donor qubit systems with modeling and simulation. Our approach integrates quantum chemistry calculations with electrostatic field solvers to perform detailed simulations of a phosphorus dopant in silicon. We show how atomistic details can be synthesized into an operational model for the logical gates that define quantum computation in this particular technology. In conclusion, the resulting computational workflow realizes a design tool for silicon donor qubits that can help verify and validate current and near-term experimental devices.
CASAS: A tool for composing automatically and semantically astrophysical services
NASA Astrophysics Data System (ADS)
Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.
2017-07-01
Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Some scientific workflow technologies exist for retrieving and combining data from those sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. The tool provides semantics-based, automatic composition of astrophysical web services into workflows. It widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies for elaborating the composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from the capabilities of VO services. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.
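Semantic composition of the kind described here can be pictured as chaining services whose declared output concept matches the next service's input concept. The toy Python sketch below illustrates that idea only; the service names and concept labels are invented and are not part of CASAS or ASON.

```python
from collections import deque

# Each (hypothetical) service declares the concept it consumes and the one it produces.
SERVICES = {
    "ConeSearch":    ("SkyPosition", "SourceList"),
    "SpectrumFetch": ("SourceList", "Spectrum"),
    "RedshiftFit":   ("Spectrum", "Redshift"),
}

def compose(start_concept, goal_concept):
    """Breadth-first search for the shortest service chain from start to goal concept."""
    queue = deque([(start_concept, [])])
    seen = {start_concept}
    while queue:
        concept, chain = queue.popleft()
        if concept == goal_concept:
            return chain
        for name, (consumes, produces) in SERVICES.items():
            if consumes == concept and produces not in seen:
                seen.add(produces)
                queue.append((produces, chain + [name]))
    return None

print(compose("SkyPosition", "Redshift"))
# ['ConeSearch', 'SpectrumFetch', 'RedshiftFit']
```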
Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.
2006-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* &Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client &server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.
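SciFlo is described above as assembling web services and executables into a tree of operators whose execution is optimized by the engine. As a rough illustration of such a dataflow tree, and not of SciFlo's XML document format or engine, the following Python sketch evaluates a tree of hypothetical operators with independent branches run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    """One operator in a dataflow tree: a callable plus its input nodes."""
    def __init__(self, func, *inputs):
        self.func, self.inputs = func, inputs

    def evaluate(self, pool):
        # Evaluate independent inputs concurrently, then apply the operator.
        futures = [pool.submit(child.evaluate, pool) for child in self.inputs]
        return self.func(*[f.result() for f in futures])

# Hypothetical operators standing in for data access, co-registration, and statistics.
fetch_airs  = Node(lambda: [300.1, 301.2, 299.8])
fetch_modis = Node(lambda: [299.9, 300.5, 300.0])
colocate    = Node(lambda a, b: list(zip(a, b)), fetch_airs, fetch_modis)
bias        = Node(lambda pairs: sum(x - y for x, y in pairs) / len(pairs), colocate)

# The pool needs enough workers for the depth of the tree being evaluated.
with ThreadPoolExecutor(max_workers=8) as pool:
    print(bias.evaluate(pool))
```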
Complete Dentures Fabricated with CAD/CAM Technology and a Traditional Clinical Recording Method.
Janeva, Nadica; Kovacevska, Gordana; Janev, Edvard
2017-10-15
The introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) technology into complete denture (CD) fabrication ushered in a new era in removable prosthodontics. Commercially available CAD/CAM denture systems are expected to improve upon the disadvantages associated with conventional fabrication. The purpose of this report is to present the workflow involved in fabricating a CD with a traditional clinical recording method and CAD/CAM technology and to summarize the advantages to the dental practitioner and the patient.
Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing
NASA Astrophysics Data System (ADS)
Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.
2011-12-01
Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also some research and application programs being launched in academia and government to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) to deliver on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several of its applications to the Nebula as a proof of concept, including: a) The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms being used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware vs. legacy hardware that must be maintained past its prime to amortize the cost. In addition to a trade study of advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth Science applications and in better understanding the potential for Cloud Computing in further data- and computing-intensive Earth Science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications/instances, e.g. Giovanni instances, to the Nebula platform and bringing mature migrated applications into operation on the Nebula.
The complete digital workflow in fixed prosthodontics: a systematic review.
Joda, Tim; Zarone, Fernando; Ferrari, Marco
2017-09-19
The continuous development in dental processing ensures new opportunities in the field of fixed prosthodontics in a complete virtual environment without any physical model situations. The aim was to compare fully digitalized workflows to conventional and/or mixed analog-digital workflows for the treatment with tooth-borne or implant-supported fixed reconstructions. A PICO strategy was executed using an electronic (MEDLINE, EMBASE, Google Scholar) plus manual search up to 2016-09-16 focusing on RCTs investigating complete digital workflows in fixed prosthodontics with regard to economics or esthetics or patient-centered outcomes with or without follow-up or survival/success rate analysis as well as complication assessment of at least 1 year under function. The search strategy was assembled from MeSH-Terms and unspecific free-text words: {(("Dental Prosthesis" [MeSH]) OR ("Crowns" [MeSH]) OR ("Dental Prosthesis, Implant-Supported" [MeSH])) OR ((crown) OR (fixed dental prosthesis) OR (fixed reconstruction) OR (dental bridge) OR (implant crown) OR (implant prosthesis) OR (implant restoration) OR (implant reconstruction))} AND {("Computer-Aided Design" [MeSH]) OR ((digital workflow) OR (digital technology) OR (computerized dentistry) OR (intraoral scan) OR (digital impression) OR (scanbody) OR (virtual design) OR (digital design) OR (cad/cam) OR (rapid prototyping) OR (monolithic) OR (full-contour))} AND {("Dental Technology" [MeSH) OR ((conventional workflow) OR (lost-wax-technique) OR (porcelain-fused-to-metal) OR (PFM) OR (implant impression) OR (hand-layering) OR (veneering) OR (framework))} AND {(("Study, Feasibility" [MeSH]) OR ("Survival" [MeSH]) OR ("Success" [MeSH]) OR ("Economics" [MeSH]) OR ("Costs, Cost Analysis" [MeSH]) OR ("Esthetics, Dental" [MeSH]) OR ("Patient Satisfaction" [MeSH])) OR ((feasibility) OR (efficiency) OR (patient-centered outcome))}. Assessment of risk of bias in selected studies was done at a 'trial level' including random sequence generation, allocation concealment, blinding, completeness of outcome data, selective reporting, and other bias using the Cochrane Collaboration tool. A judgment of risk of bias was assigned if one or more key domains had a high or unclear risk of bias. An official registration of the systematic review was not performed. The systematic search identified 67 titles, 32 abstracts thereof were screened, and subsequently, three full-texts included for data extraction. Analysed RCTs were heterogeneous without follow-up. One study demonstrated that fully digitally produced dental crowns revealed the feasibility of the process itself; however, the marginal precision was lower for lithium disilicate (LS2) restorations (113.8 μm) compared to conventional metal-ceramic (92.4 μm) and zirconium dioxide (ZrO2) crowns (68.5 μm) (p < 0.05). Another study showed that leucite-reinforced glass ceramic crowns were esthetically favoured by the patients (8/2 crowns) and clinicians (7/3 crowns) (p < 0.05). The third study investigated implant crowns. The complete digital workflow was more than twofold faster (75.3 min) in comparison to the mixed analog-digital workflow (156.6 min) (p < 0.05). No RCTs could be found investigating multi-unit fixed dental prostheses (FDP). The number of RCTs testing complete digital workflows in fixed prosthodontics is low. Scientifically proven recommendations for clinical routine cannot be given at this time. Research with high-quality trials seems to be slower than the industrial progress of available digital applications. 
Future research with well-designed RCTs including follow-up observation is compellingly necessary in the field of complete digital processing.
ROI on yield data analysis systems through a business process management strategy
NASA Astrophysics Data System (ADS)
Rehani, Manu; Strader, Nathan; Hanson, Jeff
2005-05-01
The overriding motivation for yield engineering is profitability. This is achieved through application of yield management. The first application is to continually reduce waste in the form of yield loss. New products, new technologies and the dynamic state of the process and equipment keep introducing new ways to cause yield loss. In response, the yield management efforts have to continually come up with new solutions to minimize it. The second application of yield engineering is to aid in accurate product pricing. This is achieved through predicting future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, the more accurate the wafer pricing. Another aspect of yield prediction pertains to gauging the impact of a yield problem and predicting how long it will last. The ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is: if the stakes on yield management are so high, why are most yield management efforts run like science and engineering projects and less like manufacturing? In the eighties, manufacturing put the theory of constraints into practice and put a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management activities? This line of introspection led us to define and implement a business process to manage the yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for Semiconductor Yield Engineering in a high-mix ASIC environment. We will present a description of the situation prior to deployment, a window into the development process and a valuation of the benefits.
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
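Stability and robustness of automated workflows on a shared HPC system, as emphasized above, are often achieved by wrapping each pipeline step in retry logic. The sketch below is only one generic illustration of that device, not the authors' implementation; the command is a placeholder, not a real scheduler or pipeline invocation.

```python
import subprocess
import time

def run_with_retries(cmd, max_attempts=3, base_delay=60):
    """Run one pipeline step, retrying transient failures with exponential back-off."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        time.sleep(base_delay * 2 ** (attempt - 1))   # wait longer after each failure
    raise RuntimeError(f"step failed after {max_attempts} attempts: {cmd}")

# Placeholder command; a real workflow would submit alignment, variant calling, etc.
print(run_with_retries(["echo", "align sample_01 against reference"]))
```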
Mahieu, Nathaniel G.; Spalding, Jonathan L.; Patti, Gary J.
2016-01-01
Motivation: Current informatic techniques for processing raw chromatography/mass spectrometry data break down under several common, non-ideal conditions. Importantly, hydrophilic liquid interaction chromatography (a key separation technology for metabolomics) produces data which are especially challenging to process. We identify three critical points of failure in current informatic workflows: compound-specific drift, integration region variance, and naive missing value imputation. We implement the Warpgroup algorithm to address these challenges. Results: Warpgroup adds peak subregion detection, consensus integration bound detection, and intelligent missing value imputation steps to the conventional informatic workflow. When compared with the conventional workflow, Warpgroup made major improvements to the processed data. The coefficient of variation for peaks detected in replicate injections of a complex Escherichia coli extract was halved (a reduction of 19%). Integration regions across samples were much more robust. Additionally, many signals lost by the conventional workflow were 'rescued' by the Warpgroup refinement, thereby resulting in greater analyte coverage in the processed data. Availability and implementation: Warpgroup is an open source R package available on GitHub at github.com/nathaniel-mahieu/warpgroup. The package includes example data and XCMS compatibility wrappers for ease of use. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: nathaniel.mahieu@wustl.edu or gjpattij@wustl.edu PMID:26424859
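One of the steps named above is consensus integration bound detection across samples. The Python sketch below is not the Warpgroup algorithm (which is an R package); it only illustrates the underlying idea, under the simplifying assumption that a consensus can be taken as the median of each sample's proposed start and end indices for a peak group.

```python
import statistics

def consensus_bounds(per_sample_bounds):
    """Median start and end indices across samples for one peak group."""
    starts, ends = zip(*per_sample_bounds)
    return int(statistics.median(starts)), int(statistics.median(ends))

def integrate(trace, bounds):
    """Sum intensities over the shared (consensus) integration region."""
    lo, hi = bounds
    return sum(trace[lo:hi + 1])

# Three samples proposing slightly different integration regions for one peak.
bounds = [(10, 20), (12, 22), (11, 19)]
traces = [[0] * 9 + [1, 3, 7, 9, 7, 3, 1, 0, 0, 0, 0, 0, 0] for _ in range(3)]
cb = consensus_bounds(bounds)
print(cb, [integrate(t, cb) for t in traces])   # every sample now uses the same bounds
```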
Improved compliance by BPM-driven workflow automation.
Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin
2014-12-01
Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and support the standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs of quality assurance. BPMN 2.0, as an automation language able to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). That means that, with the BPM standard, a communication method for sharing process knowledge of laboratories is also available. © 2014 Society for Laboratory Automation and Screening.
The Cherenkov Telescope Array Observatory: top level use cases
NASA Astrophysics Data System (ADS)
Bulgarelli, A.; Kosack, K.; Hinton, J.; Tosti, G.; Schwanke, U.; Schwarz, J.; Colomé, P.; Conforti, V.; Khelifi, B.; Goullon, J.; Ong, R.; Markoff, S.; Contreras, J. L.; Lucarelli, F.; Antonelli, L. A.; Bigongiari, C.; Boisson, C.; Bosnjak, Z.; Brau-Nogué, S.; Carosi, A.; Chen, A.; Cotter, G.; Covino, S.; Daniel, M.; De Cesare, G.; de Ona Wilhelmi, E.; Della Volpe, M.; Di Pierro, F.; Fioretti, V.; Füßling, M.; Garczarczyk, M.; Gaug, M.; Glicenstein, J. F.; Goldoni, P.; Götz, D.; Grandi, P.; Heller, M.; Hermann, G.; Inoue, S.; Knödlseder, J.; Lenain, J.-P.; Lindfors, E.; Lombardi, S.; Luque-Escamilla, P.; Maier, G.; Marisaldi, M.; Mundell, C.; Neyroud, N.; Noda, K.; O'Brien, P.; Petrucci, P. O.; Martí Ribas, J.; Ribó, M.; Rodriguez, J.; Romano, P.; Schmid, J.; Serre, N.; Sol, H.; Schussler, F.; Stamerra, A.; Stolarczyk, T.; Vandenbrouck, J.; Vercellone, S.; Vergani, S.; Zech, A.; Zoli, A.
2016-08-01
Today the scientific community is facing increasing complexity in scientific projects, from both a technological and a management point of view. The reason for this lies in the advance of science itself, where new experiments with unprecedented levels of accuracy, precision and coverage (temporal and spatial) are realised. Astronomy is one of the fields of the physical sciences where a strong interaction between scientists, instrument developers and software developers is necessary to achieve the goals of any Big Science Project. The Cherenkov Telescope Array (CTA) will be the largest ground-based very high-energy gamma-ray observatory of the next decades. To achieve the full potential of the CTA Observatory, a system must be put into place to enable users to operate the telescopes productively. The software will cover all stages of the CTA system, from the preparation of the observing proposals to the final data reduction, and must also fit into the overall system. Scientists, engineers, operators and others will use the system to operate the Observatory, hence they should be involved in the design process from the beginning. We have organised a workgroup and a workflow for the definition of the CTA Top Level Use Cases in the context of the Requirement Management activities of the CTA Observatory. Scientists, instrument and software developers are collaborating and sharing information to provide a common and general understanding of the Observatory from a functional point of view. Scientists who will use the CTA Observatory will mainly provide Science Driven Use Cases, whereas software engineers will subsequently provide more detailed Use Cases, comments and feedback. The main purposes are to define observing modes and strategies, and to provide a framework for the flow-down of the Use Cases and requirements, to check for missing requirements and to check against the already developed Use-Case models at CTA sub-system level. Use Cases will also provide the basis for the definition of the Acceptance Test Plan for the validation of the overall CTA system. In this contribution we present the organisation and the workflow of the Top Level Use Cases workgroup.
Blijleven, Vincent; Koelemeijer, Kitty; Wetzels, Marijntje; Jaspers, Monique
2017-10-05
Health care providers resort to informal temporary practices known as workarounds for handling exceptions to normal workflow unintendedly imposed by electronic health record systems (EHRs). Although workarounds may seem favorable at first sight, they are generally suboptimal and may jeopardize patient safety, effectiveness of care, and efficiency of care. Research into the scope and impact of EHR workarounds on patient care processes is scarce. This paper provides insight into the effects of EHR workarounds on organizational workflows and outcomes of care services by identifying EHR workarounds and determining their rationales, scope, and impact on health care providers' workflows, patient safety, effectiveness of care, and efficiency of care. Knowing the rationale of a workaround provides valuable clues about the source of origin of each workaround and how each workaround could most effectively be resolved. Knowing the scope and impact a workaround has on EHR-related safety, effectiveness, and efficiency provides insight into how to address related concerns. Direct observations and follow-up semistructured interviews with 31 physicians, 13 nurses, and 3 clerks and qualitative bottom-up coding techniques was used to identify, analyze, and classify EHR workarounds. The research was conducted within 3 specialties and settings at a large university hospital. Rationales were associated with work system components (persons, technology and tools, tasks, organization, and physical environment) of the Systems Engineering Initiative for Patient Safety (SEIPS) framework to reveal their source of origin as well as to determine the scope and the impact of each EHR workaround from a structure-process-outcome perspective. A total of 15 rationales for EHR workarounds were identified of which 5 were associated with persons, 4 with technology and tools, 4 with the organization, and 2 with the tasks. Three of these 15 rationales for EHR workarounds have not been identified in prior research: data migration policy, enforced data entry, and task interference. EHR workaround rationales associated with different SEIPS work system components demand a different approach to be resolved. Persons-related workarounds may most effectively be resolved through personal training, organization-related workarounds through reviewing organizational policy and regulations, tasks-related workarounds through process redesign, and technology- and tools-related workarounds through EHR redesign efforts. Furthermore, insights gained from knowing a workaround's degree of influence as well as impact on patient safety, effectiveness of care, and efficiency of care can inform design and redesign of EHRs to further align EHR design with work contexts, subsequently leading to better organization and (safe) provision of care. In doing so, a research team in collaboration with all stakeholders could use the SEIPS framework to reflect on the current and potential future configurations of the work system to prevent unfavorable workarounds from occurring and how a redesign of the EHR would impact interactions between the work system components. ©Vincent Blijleven, Kitty Koelemeijer, Marijntje Wetzels, Monique Jaspers. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 05.10.2017.
Accelerating Cancer Systems Biology Research through Semantic Web Technology
Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.
2012-01-01
Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using 1 core, and 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
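The architecture described above, a managing framework with four swappable stages and object-level parallelization over tumors, can be sketched in a few lines. The real QIFE is MATLAB and its component interfaces differ; every function below is a hypothetical stand-in.

```python
from concurrent.futures import ProcessPoolExecutor

# Swappable stage implementations; each stage is simply a callable.
def load_object(obj_id):     return {"id": obj_id, "voxels": list(range(obj_id * 10))}
def preprocess(obj):         return {**obj, "voxels": [v for v in obj["voxels"] if v > 0]}
def compute_features(obj):   return {"id": obj["id"], "volume": len(obj["voxels"])}
def write_output(features):  return f"{features['id']},{features['volume']}"

PIPELINE = [load_object, preprocess, compute_features, write_output]

def process_one(obj_id):
    """Run every stage in order for a single object (tumor)."""
    value = obj_id
    for stage in PIPELINE:
        value = stage(value)
    return value

if __name__ == "__main__":
    # Object-level parallelization: each tumor traverses the pipeline independently.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for row in pool.map(process_one, range(1, 9)):
            print(row)
```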
Flexible Early Warning Systems with Workflows and Decision Tables
NASA Astrophysics Data System (ADS)
Riedel, F.; Chaves, F.; Zeiner, H.
2012-04-01
An essential part of early warning systems and systems for crisis management are decision support systems that facilitate communication and collaboration. Often official policies specify how different organizations collaborate and what information is communicated to whom. For early warning systems it is crucial that information is exchanged dynamically in a timely manner and all participants get exactly the information they need to fulfil their role in the crisis management process. Information technology obviously lends itself to automate parts of the process. We have experienced however that in current operational systems the information logistics processes are hard-coded, even though they are subject to change. In addition, systems are tailored to the policies and requirements of a certain organization and changes can require major software refactoring. We seek to develop a system that can be deployed and adapted to multiple organizations with different dynamic runtime policies. A major requirement for such a system is that changes can be applied locally without affecting larger parts of the system. In addition to the flexibility regarding changes in policies and processes, the system needs to be able to evolve; when new information sources become available, it should be possible to integrate and use these in the decision process. In general, this kind of flexibility comes with a significant increase in complexity. This implies that only IT professionals can maintain a system that can be reconfigured and adapted; end-users are unable to utilise the provided flexibility. In the business world similar problems arise and previous work suggested using business process management systems (BPMS) or workflow management systems (WfMS) to guide and automate early warning processes or crisis management plans. However, the usability and flexibility of current WfMS are limited, because current notations and user interfaces are still not suitable for end-users, and workflows are usually only suited for rigid processes. We show how improvements can be achieved by using decision tables and rule-based adaptive workflows. Decision tables have been shown to be an intuitive tool that can be used by domain experts to express rule sets that can be interpreted automatically at runtime. Adaptive workflows use a rule-based approach to increase the flexibility of workflows by providing mechanisms to adapt workflows based on context changes, human intervention and availability of services. The combination of workflows, decision tables and rule-based adaption creates a framework that opens up new possibilities for flexible and adaptable workflows, especially, for use in early warning and crisis management systems.
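A decision table can be represented directly as data and evaluated at runtime, which is what allows domain experts to change the rules without touching code. The sketch below is illustrative only; the conditions, actions, and thresholds are invented and are not taken from any operational early warning system.

```python
# Each row: (condition over the current situation, action to trigger).
# Rows are ordered by priority; the first matching row wins.
DECISION_TABLE = [
    (lambda s: s["water_level_cm"] > 300 and s["trend"] == "rising", "issue_evacuation_warning"),
    (lambda s: s["water_level_cm"] > 250,                            "notify_crisis_team"),
    (lambda s: True,                                                 "continue_monitoring"),
]

def decide(situation):
    """Return the action of the first row whose condition holds."""
    for condition, action in DECISION_TABLE:
        if condition(situation):
            return action

print(decide({"water_level_cm": 320, "trend": "rising"}))   # issue_evacuation_warning
print(decide({"water_level_cm": 260, "trend": "falling"}))  # notify_crisis_team
```

In an adaptive workflow, such a table could be re-loaded or edited at runtime, so that changes in policy do not require redeploying the workflow engine itself.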
Potential of knowledge discovery using workflows implemented in the C3Grid
NASA Astrophysics Data System (ADS)
Engel, Thomas; Fink, Andreas; Ulbrich, Uwe; Schartner, Thomas; Dobler, Andreas; Fritzsch, Bernadette; Hiller, Wolfgang; Bräuer, Benny
2013-04-01
With the increasing number of climate simulations, reanalyses and observations, new infrastructures to search and analyse distributed data are necessary. In recent years, the Grid architecture became an important technology to fulfill these demands. For the German project "Collaborative Climate Community Data and Processing Grid" (C3Grid), computer scientists and meteorologists developed a system that offers its users a web interface to search and download climate data and use implemented analysis tools (called workflows) to further investigate them. In this contribution, two workflows that are implemented in the C3Grid architecture are presented: the Cyclone Tracking (CT) and Stormtrack workflows. They serve as examples of how to perform numerous investigations of midlatitude winter storms on large amounts of analysis and climate model data without insight into the data source or program code, and with only a low-to-moderate understanding of the theoretical background. CT is based on the work of Murray and Simmonds (1991) to identify and track local minima in the mean sea level pressure (MSLP) field of the selected dataset. Adjustable thresholds for the curvature of the isobars as well as the minimum lifetime of a cyclone allow weak subtropical heat-low systems to be distinguished from stronger midlatitude cyclones, e.g. in the North Atlantic. The user receives the resulting track data, including statistics on the track density, average central pressure, average central curvature, cyclogenesis and cyclolysis, as well as pre-built visualizations of these results. Stormtrack calculates the 2.5-6 day band-pass-filtered standard deviation of the geopotential height on a selected pressure level. Although this workflow needs much less computational effort compared to CT, it shows structures that are in good agreement with the track density of the CT workflow. This makes it possible to investigate to what extent changes in the mid-level tropospheric storm track are reflected in changes in the density and intensity of surface cyclones. A specific feature of C3Grid is the flexible Workflow Scheduling Service (WSS), which also allows for automated nightly analysis runs of CT, Stormtrack, etc. with different input parameter sets. The statistical results of these workflows can be accumulated afterwards by a scheduled final analysis step, thereby providing a tool for data-intensive analytics for the massive amounts of climate model data accessible through C3Grid. First tests with these automated analysis workflows show promising results to speed up the investigation of high-volume modeling data. This example is relevant to the thorough analysis of future changes in storminess in Europe and is just one example of the potential of knowledge discovery using automated workflows implemented in the C3Grid architecture.
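The detection step of such a cyclone tracking workflow amounts to finding local minima in a gridded MSLP field and filtering them by a depth or curvature criterion. The sketch below is not the Murray and Simmonds (1991) scheme; it only illustrates the idea on a synthetic field, with an arbitrary neighbourhood-depth threshold standing in for the curvature test.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def detect_lows(mslp, depth_hpa=2.0, window=5):
    """Return (i, j) grid indices of pressure minima that lie at least
    `depth_hpa` below their neighbourhood mean -- a crude stand-in for the
    isobar-curvature threshold used by a real tracking scheme."""
    is_min = mslp == minimum_filter(mslp, size=window, mode="nearest")
    depth = uniform_filter(mslp, size=window, mode="nearest") - mslp
    return list(zip(*np.where(is_min & (depth >= depth_hpa))))

# Synthetic 1000 hPa field with a single sharp low centred at grid point (25, 30).
field = np.full((50, 50), 1000.0)
yy, xx = np.mgrid[0:50, 0:50]
field -= 12.0 * np.exp(-((yy - 25) ** 2 + (xx - 30) ** 2) / 10.0)
print(detect_lows(field))   # expected: a single minimum near (25, 30)
```

Tracking would then link minima detected in consecutive time steps and discard tracks shorter than the minimum lifetime.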
High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away
NASA Astrophysics Data System (ADS)
Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.
2012-09-01
By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file-systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel distributed across cyber infrastructure environments having different architectures. We have used the Pegasus Work Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing), involves establishing a distributed environment, where issues of, e.g, remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. 
In most of our work, we provisioned compute resources using a custom application, called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end-user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.
Patterson, Emily S.; Lowry, Svetlana Z.; Ramaiah, Mala; Gibbons, Michael C.; Brick, David; Calco, Robert; Matton, Greg; Miller, Anne; Makar, Ellen; Ferrer, Jorge A.
2015-01-01
Introduction: Human factors workflow analyses in healthcare settings prior to technology implementation are recommended to improve workflow in ambulatory care settings. In this paper we describe how insights from a workflow analysis conducted by NIST were implemented in a software prototype developed for a Veterans Health Administration (VHA) VAi2 innovation project and associated lessons learned. Methods: We organize the original recommendations and associated stages and steps visualized in process maps from NIST and the VA's lessons learned from implementing the recommendations in the VAi2 prototype according to four stages: 1) before the patient visit, 2) during the visit, 3) discharge, and 4) visit documentation. NIST recommendations to improve workflow in ambulatory care (outpatient) settings and process map representations were based on reflective statements collected during one-hour discussions with three physicians. The development of the VAi2 prototype was initially conducted independently of the NIST recommendations, but at a midpoint in the process development, all of the implementation elements were compared with the NIST recommendations and lessons learned were documented. Findings: Story-based displays and templates with default preliminary order sets were used to support scheduling, time-critical notifications, drafting medication orders, and supporting a diagnosis-based workflow. These templates enabled customization to the level of diagnostic uncertainty. Functionality was designed to support cooperative work across interdisciplinary team members, including shared documentation sessions with tracking of text modifications, medication lists, and patient education features. Displays were customized to the role and included access for consultants and site-defined educator teams. Discussion: Workflow, usability, and patient safety can be enhanced through clinician-centered design of electronic health records. The lessons learned from implementing NIST recommendations to improve workflow in ambulatory care using an EHR provide a first step in moving from a billing-centered perspective on how to maintain accurate, comprehensive, and up-to-date information about a group of patients to a clinician-centered perspective. These recommendations point the way towards a “patient visit management system,” which incorporates broader notions of supporting workload management, supporting flexible flow of patients and tasks, enabling accountable distributed work across members of the clinical team, and supporting dynamic tracking of steps in tasks that have longer time distributions. PMID:26290887
Ho, Jonhan; Aridor, Orly; Parwani, Anil V.
2012-01-01
Background: For decades, anatomic pathology (AP) workflow has been a highly manual process based on the use of an optical microscope and glass slides. Recent innovations in scanning and digitizing of entire glass slides are accelerating a move toward widespread adoption and implementation of a workflow based on digital slides and their supporting information management software. To support the design of digital pathology systems and ensure their adoption into pathology practice, the needs of the main users within the AP workflow, the pathologists, should be identified. Contextual inquiry is a qualitative, user-centered, social method designed to identify and understand users’ needs and is utilized for collecting, interpreting, and aggregating detailed aspects of work. Objective: Contextual inquiry was utilized to document current AP workflow, identify processes that may benefit from the introduction of digital pathology systems, and establish design requirements for digital pathology systems that will meet pathologists’ needs. Materials and Methods: Pathologists were observed and interviewed at a large academic medical center according to contextual inquiry guidelines established by Holtzblatt et al. (1998). Notes representing user-provided data were documented during observation sessions. An affinity diagram, a hierarchical organization of the notes based on common themes in the data, was created. Five graphical models were developed to help visualize the data, including sequence, flow, artifact, physical, and cultural models. Results: A total of six pathologists were observed by a team of two researchers. A total of 254 affinity notes were documented and organized using a system based on topical hierarchy, including 75 third-level, 24 second-level, and five main-level categories, namely technology, communication, synthesis/preparation, organization, and workflow. Current AP workflow was labor-intensive and lacked scalability. A large number of processes that may possibly improve following the introduction of digital pathology systems were identified. These work processes included case management, case examination and review, and final case reporting. Furthermore, a digital slide system should integrate with the anatomic pathology laboratory information system. Conclusions: To our knowledge, this is the first study that utilized the contextual inquiry method to document AP workflow. Findings were used to establish key requirements for the design of digital pathology systems. PMID:23243553
Neurosurgery contact handheld probe based on sapphire shaped crystal
NASA Astrophysics Data System (ADS)
Shikunova, I. A.; Stryukov, D. O.; Rossolenko, S. N.; Kiselev, A. M.; Kurlov, V. N.
2017-01-01
A handheld contact probe based on a sapphire shaped crystal has been developed for intraoperative spectrally resolved optical diagnostics, laser coagulation and aspiration of malignant brain tissue. The technology was integrated into the neurosurgical workflow for intraoperative real-time identification and removal of invasive brain cancer.
Zhang, Yi; Monsen, Karen A; Adam, Terrence J; Pieczkiewicz, David S; Daman, Megan; Melton, Genevieve B
2011-01-01
Time and motion (T&M) studies provide an objective method to measure the expenditure of time by clinicians. While some instruments for T&M studies have been designed to evaluate health information technology (HIT), these instruments have not been designed for nursing workflow. We took an existing open source HIT T&M study application designed to evaluate physicians in the ambulatory setting, rationally adapted it through empiric observations to record nursing activities in the inpatient setting, and linked this instrument to an existing interface terminology, the Omaha System. Nursing activities involved several dimensions and could include multiple activities occurring simultaneously, requiring significant instrument redesign. Overall, 94% of the activities from the study instrument mapped adequately to the Omaha System. T&M study instruments require customization in design to optimize them for different environments, such as inpatient nursing, and to enable optimal data collection. Interface terminologies show promise as a framework for recording and analyzing T&M study data. PMID:22195228
NASA Astrophysics Data System (ADS)
Bader, D. C.
2015-12-01
The Accelerated Climate Modeling for Energy (ACME) Project is concluding its first year. Supported by the Office of Science in the U.S. Department of Energy (DOE), its vision is to be "an ongoing, state-of-the-science Earth system modeling, simulation and prediction project that optimizes the use of DOE laboratory resources to meet the science needs of the nation and the mission needs of DOE." Included in the "laboratory resources" is a large investment in computational, network and information technologies that will be utilized both to build better and more accurate climate models and to broadly disseminate the data they generate. Current model diagnostic analysis and data dissemination technologies will not scale to the size of the simulations and the complexity of the models envisioned by ACME and other top-tier international modeling centers. In this talk, the ACME Workflow component's plans to meet these future needs will be described and early implementation examples will be highlighted.
Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.
Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor
2016-01-01
In Quantum Chemistry, many tasks reoccur frequently, e.g. geometry optimizations, benchmarking series, etc. Here, workflows can help to reduce the time spent on manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant effort and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used only for particular scientific experiments. Hence, their modification is not straightforward, which makes it almost impossible to share them. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined, research-domain-specific function. Publishing workflows in repositories enables workflow sharing within and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused on the orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows, we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows to the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
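The core idea, wrapping well-defined atomic workflows and orchestrating them inside meta-workflows, can be illustrated with a very small Python sketch. The class names, step names, and numbers below are hypothetical and do not reflect the SHIWA repository format or the authors' formal specification.

```python
class AtomicWorkflow:
    """A reusable, well-defined unit: one domain-specific function with named inputs."""
    def __init__(self, name, func):
        self.name, self.func = name, func

    def run(self, **inputs):
        return self.func(**inputs)

class MetaWorkflow:
    """Orchestrates atomic workflows; the output of each step feeds the next."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, data):
        for step in self.steps:
            data = step.run(**data)
        return data

# Hypothetical quantum-chemistry-flavoured atomic workflows (toy values only).
geometry_opt = AtomicWorkflow(
    "geometry_opt", lambda molecule: {"molecule": molecule, "energy": -76.4})
frequency = AtomicWorkflow(
    "frequency", lambda molecule, energy: {"molecule": molecule, "energy": energy, "zpe": 0.021})

benchmark = MetaWorkflow([geometry_opt, frequency])
print(benchmark.run({"molecule": "H2O"}))
```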
Slade Shantz, Jesse Alan; Veillette, Christian J. H.
2014-01-01
Wearable technology has become an important trend in consumer electronics in the past year. The miniaturization and mass production of myriad sensors have made possible the integration of sensors and output devices in wearable platforms. Despite the consumer focus of the wearable revolution, some surgical applications are being developed. These fall into augmentative, assistive, and assessment functions and primarily layer onto current surgical workflows. Some challenges to the adoption of wearable technologies are discussed, and a conceptual framework for understanding the potential of wearable technology to revolutionize surgical practice is presented. PMID:25593963
JTSA: an open source framework for time series abstractions.
Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana
2015-10-01
The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, where the framework is used to extract relevant patterns from data related to the long-term monitoring of diabetic patients. Proof that JTSA is a versatile tool that can be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
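State temporal abstraction of the kind described above maps raw values into qualitative states and merges consecutive samples sharing a state into intervals. The following minimal Python illustration is not JTSA's Java API; the thresholds and labels are invented.

```python
def to_state(glucose):
    """Map a raw glucose value (mg/dL) to a qualitative state (illustrative cut-offs)."""
    if glucose < 70:
        return "low"
    if glucose <= 180:
        return "normal"
    return "high"

def state_abstraction(samples):
    """Merge consecutive time-stamped samples with the same state into intervals."""
    intervals = []
    for t, value in samples:
        state = to_state(value)
        if intervals and intervals[-1][2] == state:
            intervals[-1][1] = t                 # extend the current interval
        else:
            intervals.append([t, t, state])      # open a new interval
    return [(start, end, state) for start, end, state in intervals]

series = [(0, 95), (15, 110), (30, 190), (45, 210), (60, 150)]
print(state_abstraction(series))
# [(0, 15, 'normal'), (30, 45, 'high'), (60, 60, 'normal')]
```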
ESO Reflex: a graphical workflow engine for data reduction
NASA Astrophysics Data System (ADS)
Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo
ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.
Bleser, Gabriele; Damen, Dima; Behera, Ardhendu; Hendeby, Gustaf; Mura, Katharina; Miezal, Markus; Gee, Andrew; Petersen, Nils; Maçães, Gustavo; Domingues, Hugo; Gorecky, Dominic; Almeida, Luis; Mayol-Cuevas, Walterio; Calway, Andrew; Cohn, Anthony G.; Hogg, David C.; Stricker, Didier
2015-01-01
Today, the workflows that are involved in industrial assembly and production activities are becoming increasingly complex. To efficiently and safely perform these workflows is demanding on the workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user’s pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene; and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited size datasets indicate and highlight the potential of the chosen technology as a combined entity as well as point out limitations of the system. PMID:26126116
SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.
Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B
2016-02-04
Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.
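The step sequence described above (trim, map, count, then test for differential expression) can be pictured as a thin Python driver that chains external tools. The tool names and command-line flags below are placeholders chosen for illustration and are not SPARTA's actual tool choices or options.

    import subprocess

    def sh(cmd):
        """Run one pipeline stage and stop the workflow if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def rnaseq_workflow(sample, reads_fastq, reference_fasta):
        # 1. Trim reads and remove adapters (tool name and flags are placeholders).
        trimmed = sample + ".trimmed.fastq"
        sh(["trim_tool", "-i", reads_fastq, "-o", trimmed])
        # 2. Map the trimmed reads to the reference genome.
        sam = sample + ".sam"
        sh(["mapper", "-x", reference_fasta, "-U", trimmed, "-S", sam])
        # 3. Count reads per gene feature.
        counts = sample + ".counts.txt"
        sh(["feature_counter", sam, "-o", counts])
        return counts

    # Differential expression and batch-effect checks would then be run on the
    # combined count tables, for example by calling an R package in a later step.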
de Carvalho, Elias Cesar Araujo; Batilana, Adelia Portero; Claudino, Wederson; Reis, Luiz Fernando Lima; Schmerling, Rafael A; Shah, Jatin; Pietrobon, Ricardo
2012-01-01
With the exponential expansion of clinical trials conducted in BRIC (Brazil, Russia, India, and China) and VISTA (Vietnam, Indonesia, South Africa, Turkey, and Argentina) countries, corresponding gains in cost and enrolment efficiency quickly outpace the consonant metrics in traditional countries in North America and the European Union. However, questions still remain regarding the quality of data being collected in these countries. We used ethnographic, mapping and computer simulation studies to identify and address areas of threat to near miss events for data quality in two cancer trial sites in Brazil. Two sites, in São Paulo and Rio de Janeiro, were evaluated using ethnographic observations of workflow during subject enrolment and data collection. Emerging themes related to threats to near miss events for data quality were derived from observations. They were then transformed into workflows using UML-AD and modeled using System Dynamics. 139 tasks were observed and mapped through the ethnographic study. The UML-AD detected four major activities in the workflow: evaluation of potential research subjects prior to signature of informed consent, the visit to obtain the subject's informed consent, regular data collection sessions following the study protocol, and closure of the study protocol for a given project. Field observations pointed to three major emerging themes: (a) lack of a standardized process for data registration at the source document, (b) multiplicity of data repositories and (c) scarcity of decision support systems at the point of research intervention. Simulation with the policy model demonstrates a reduction of the rework problem. Patterns of threats to data quality at the two sites were similar to the threats reported in the literature for American sites. Clinical trial site managers need to reorganize staff workflow by using information technology more efficiently, establish new standard procedures and manage professionals to reduce near miss events and save time/cost. Clinical trial sponsors should improve relevant support systems.
Araujo de Carvalho, Elias Cesar; Batilana, Adelia Portero; Claudino, Wederson; Lima Reis, Luiz Fernando; Schmerling, Rafael A.; Shah, Jatin; Pietrobon, Ricardo
2012-01-01
Background With the exponential expansion of clinical trials conducted in BRIC (Brazil, Russia, India, and China) and VISTA (Vietnam, Indonesia, South Africa, Turkey, and Argentina) countries, corresponding gains in cost and enrolment efficiency quickly outpace the consonant metrics in traditional countries in North America and the European Union. However, questions still remain regarding the quality of data being collected in these countries. We used ethnographic, mapping and computer simulation studies to identify and address areas of threat to near miss events for data quality in two cancer trial sites in Brazil. Methodology/Principal Findings Two sites, in São Paulo and Rio de Janeiro, were evaluated using ethnographic observations of workflow during subject enrolment and data collection. Emerging themes related to threats to near miss events for data quality were derived from observations. They were then transformed into workflows using UML-AD and modeled using System Dynamics. 139 tasks were observed and mapped through the ethnographic study. The UML-AD detected four major activities in the workflow: evaluation of potential research subjects prior to signature of informed consent, the visit to obtain the subject's informed consent, regular data collection sessions following the study protocol, and closure of the study protocol for a given project. Field observations pointed to three major emerging themes: (a) lack of a standardized process for data registration at the source document, (b) multiplicity of data repositories and (c) scarcity of decision support systems at the point of research intervention. Simulation with the policy model demonstrates a reduction of the rework problem. Conclusions/Significance Patterns of threats to data quality at the two sites were similar to the threats reported in the literature for American sites. Clinical trial site managers need to reorganize staff workflow by using information technology more efficiently, establish new standard procedures and manage professionals to reduce near miss events and save time/cost. Clinical trial sponsors should improve relevant support systems. PMID:22768105
Implementation of Epic Beaker Anatomic Pathology at an Academic Medical Center.
Blau, John Larry; Wilford, Joseph D; Dane, Susan K; Karandikar, Nitin J; Fuller, Emily S; Jacobsmeier, Debbie J; Jans, Melissa A; Horning, Elisabeth A; Krasowski, Matthew D; Ford, Bradley A; Becker, Kent R; Beranek, Jeanine M; Robinson, Robert A
2017-01-01
Beaker is a relatively new laboratory information system (LIS) offered by Epic Systems Corporation as part of its suite of health-care software and bundled with its electronic medical record, EpicCare. It is divided into two modules, Beaker anatomic pathology (Beaker AP) and Beaker Clinical Pathology. In this report, we describe our experience implementing Beaker AP version 2014 at an academic medical center with a go-live date of October 2015. This report covers preimplementation preparations and challenges beginning in September 2014, issues discovered soon after go-live in October 2015, and some post go-live optimizations using data from meetings, debriefings, and the project closure document. We share specific issues that we encountered during implementation, including difficulties with the proposed frozen section workflow, developing a shared specimen source dictionary, and implementation of the standard Beaker workflow in a large institution with trainees. We share specific strategies that we used to overcome these issues for a successful Beaker AP implementation. Several areas of the laboratory required adaptation of the default Beaker build parameters to meet the needs of the workflow in a busy academic medical center. In a few areas, our laboratory was unable to use the Beaker functionality to support our workflow, and we have continued to use paper or have altered our workflow. In spite of several difficulties that required creative solutions before go-live, the implementation has been successful based on satisfaction surveys completed by pathologists and others who use the software. However, optimization of Beaker workflows has continued to be an ongoing process after go-live to the present time. The Beaker AP LIS can be successfully implemented at an academic medical center but requires significant forethought, creative adaptation, and continued shared management of the ongoing product by institutional and departmental information technology staff as well as laboratory managers to meet the needs of the laboratory.
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.
2013-09-01
Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of a workflow of services provides an efficient, flexible and reliable implementation for handling different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing on SOA platforms to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow that acquires and processes remotely sensed data, detects fires and sends notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are presented using web-based processing of remote sensing imagery, utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as a composition engine. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client built with open-source software was developed to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.
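To make the service-chaining idea concrete, the sketch below shows how a client might trigger one geoprocessing step of such a workflow through a WPS Execute request over HTTP. The endpoint URL, the process identifier and the input names are hypothetical and are not taken from the system described above.

    import requests

    WPS_URL = "https://example.org/wps"  # hypothetical WPS endpoint

    # Minimal key-value-pair Execute request for a hypothetical fire-detection process.
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "DetectFire",                          # hypothetical process id
        "datainputs": "scene=modis_scene_001;threshold=330",  # hypothetical inputs
    }

    response = requests.get(WPS_URL, params=params, timeout=60)
    response.raise_for_status()
    print(response.text[:500])  # ExecuteResponse XML: status, outputs, ...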
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet today represents a major challenge for biologists, in terms of their management and visualization, and for bioinformaticians, in terms of rapidly creating and executing in-silico experiments involving resources and activities spread across the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory must tackle, and if possible handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective in mind. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic or computational) "type" of domain knowledge (e.g., resources and activities, expressed in declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Poon, Eric G; Keohane, Carol; Featherstone, Erica; Hays, Brandon; Dervan, Andrew; Woolf, Seth; Hayes, Judy; Bane, Anne; Newmark, Lisa; Gandhi, Tejal K
2006-01-01
In a time-motion study conducted in a hospital that recently implemented barcode medication administration (BCMA) technology, we found that the BCMA system did not increase the amount of time nurses spend on medication administration activities, and did not compromise the amount of time nurses spent on direct care of patients. Our results should allay concerns regarding the impact of BCMA on nursing workflow. PMID:17238684
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Vallenilla Ferrara, A. M.
2009-04-01
The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at a depth of several kilometers, it is primarily used by the hydrocarbon industry and hardly for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is substituted by remote access to high performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow users to publish already accomplished projects, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.
Contreras, Iván; Kiefer, Stephan; Vehi, Josep
2017-01-01
Diabetes self-management is a crucial element for all people with diabetes and those at risk for developing the disease. Diabetic patients should be empowered to increase their self-management skills in order to prevent or delay the complications of diabetes. This work presents the proposal and first development stages of a smartphone application focused on the empowerment of patients with diabetes. The concept of this interventional tool is based on the personalization of the user experience from an adaptive and dynamic perspective. The segmentation of the population and the dynamic treatment of user profiles across the different experience levels is the main challenge of the implementation. The self-management assistant and remote treatment for diabetes aims to develop a platform that integrates a series of innovative models and tools, rigorously tested and supported by the diabetes research literature, together with the use of a proven engine to manage healthcare workflows.
Generation of genome-modified Drosophila cell lines using SwAP.
Franz, Alexandra; Brunner, Erich; Basler, Konrad
2017-10-02
The ease of generating genetically modified animals and cell lines has been markedly increased by the recent development of the versatile CRISPR/Cas9 tool. However, while the isolation of isogenic cell populations is usually straightforward for mammalian cell lines, the generation of clonal Drosophila cell lines has remained a longstanding challenge, hampered by the difficulty of getting Drosophila cells to grow at low densities. Here, we describe a highly efficient workflow to generate clonal Cas9-engineered Drosophila cell lines using a combination of cell pools, limiting dilution in conditioned medium and PCR with allele-specific primers, enabling the efficient selection of a clonal cell line with a suitable mutation profile. We validate the protocol by documenting the isolation, selection and verification of eight independently Cas9-edited armadillo mutant Drosophila cell lines. Our method provides a powerful and simple workflow that improves the utility of Drosophila cells for genetic studies with CRISPR/Cas9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Hongyi; Li, Yang; Zeng, Danielle
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compounds (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representation volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
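The fast-path/slow-path arrangement with a policy cache can be illustrated with a small sketch. The flow key, the regular-expression signatures and the policy actions are invented for the example and do not reproduce the patented design.

    import re

    class Dispatcher:
        """Toy packet dispatcher with a cached fast path and a signature slow path."""

        def __init__(self, signatures):
            # Slow path: full signature matching with compiled regular expressions.
            self.signatures = [(name, re.compile(pat)) for name, pat in signatures]
            # Fast path: cache of previously decided policies, keyed by flow.
            self.policy_cache = {}

        def dispatch(self, flow_key, payload):
            # Fast path: a cached policy avoids signature matching entirely.
            if flow_key in self.policy_cache:
                return self.policy_cache[flow_key]
            # Slow path: evaluate each signature against the payload.
            action = "pass"
            for name, regex in self.signatures:
                if regex.search(payload):
                    action = "drop:" + name
                    break
            self.policy_cache[flow_key] = action
            return action

    d = Dispatcher([("path-traversal", r"GET /\.\./")])
    print(d.dispatch(("10.0.0.1", 80), "GET /../etc/passwd"))  # slow path, then cached
    print(d.dispatch(("10.0.0.1", 80), "GET /../etc/shadow"))  # served from the cache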
USDA-ARS's Scientific Manuscript database
Advances in technologies associated with unmanned aerial vehicles (UAVs) have allowed researchers, farmers and agribusinesses to incorporate UAVs coupled with various imaging systems into data collection activities and to aid expert systems for making decisions. Multispectral imageries allow for a q...
Simulation software: engineer processes before reengineering.
Lepley, C J
2001-01-01
People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer aided simulation to illustrate their use in making decisions to improve workflow design.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
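A toy version of the kind of numerical experiment described, a greedy one-gene-at-a-time search over a simulated expression landscape, is sketched below. The random landscape model and the simple coordinate search are illustrative assumptions and are not the authors' algorithms.

    import random

    random.seed(0)
    N_GENES, LEVELS = 4, 5  # four genes, five discrete expression levels each

    # Simulated "expression landscape": a random titer for every combination,
    # standing in for the rugged landscapes explored in the study.
    landscape = {}
    def titer(design):
        if design not in landscape:
            landscape[design] = random.random()
        return landscape[design]

    # Greedy coordinate search: vary one gene per design-build-test iteration.
    design = tuple(random.randrange(LEVELS) for _ in range(N_GENES))
    for _ in range(10):
        best = design
        for gene in range(N_GENES):
            for level in range(LEVELS):
                candidate = design[:gene] + (level,) + design[gene + 1:]
                if titer(candidate) > titer(best):
                    best = candidate
        if best == design:
            break  # reached a local optimum on this landscape
        design = best

    print("best design:", design, "titer:", round(titer(design), 3))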
Genetic design automation: engineering fantasy or scientific renewal?
Lux, Matthew W; Bramlett, Brian W; Ball, David A; Peccoud, Jean
2012-02-01
The aim of synthetic biology is to make genetic systems more amenable to engineering, which has naturally led to the development of computer-aided design (CAD) tools. Experimentalists still primarily rely on project-specific ad hoc workflows instead of domain-specific tools, which suggests that CAD tools are lagging behind the front line of the field. Here, we discuss the scientific hurdles that have limited the productivity gains anticipated from existing tools. We argue that the real value of efforts to develop CAD tools is the formalization of genetic design rules that determine the complex relationships between genotype and phenotype.
Design Description for Team-Based Execution of Autonomous Missions (TEAM), Spiral 1
2008-11-18
Excerpt from the design document; the original page mixes header text with an architecture diagram that did not survive extraction. The recoverable content lists the architecture components: a visualization framework (WorldWind); Hibernate / Hibernate Spatial persistence with XML mapping; WCS, WFS and WMS services; an Enterprise Service Bus (Mule) providing messaging, data transformation and intelligent routing; and a workflow engine (jBPM). These are government-selected solutions; neither these nor Mule are deliverables, but the government may opt to use them if it so chooses.
Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will
2016-01-01
With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish workflow for and demonstrate a unique set of web application tools for linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema and NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which is captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.
NASA Astrophysics Data System (ADS)
Orlov, M. Yu; Lukachev, S. V.; Anisimov, V. M.
2018-01-01
The position of the combustion chamber between the compressor and the turbine, and the combined action of these elements, imply that the working processes of all these components are interconnected. One of the main requirements of the combustion chamber is the formation of the desired temperature field at the turbine inlet, which ensures the necessary durability of the nozzle assembly and blade wheel of the first stage of the high-pressure turbine. A method for integrated simulation of the combustion chamber and the neighboring components (compressor and turbine) was developed. In the first stage of the study, this method was used to investigate the influence of the non-uniform flow distribution that occurs after the compressor blades on the combustion chamber workflow. The goal of the study is to assess the impact of the non-uniformity of the flow distribution after the compressor on the parameters before the turbine. The calculation was carried out as a transient case for a selected engine operating mode. The simulation showed that including the compressor has an effect on the combustion chamber workflow and allows the temperature field at the turbine inlet to be determined, and its durability to be assessed, more accurately. In addition, the simulation with the turbine showed changes in the flow velocity distribution and pressure in the combustion chamber.
Liu, Ming-Qi; Zeng, Wen-Feng; Fang, Pan; Cao, Wei-Qian; Liu, Chao; Yan, Guo-Quan; Zhang, Yang; Peng, Chao; Wu, Jian-Qiang; Zhang, Xiao-Jin; Tu, Hui-Jun; Chi, Hao; Sun, Rui-Xiang; Cao, Yong; Dong, Meng-Qiu; Jiang, Bi-Yun; Huang, Jiang-Ming; Shen, Hua-Li; Wong, Catherine C L; He, Si-Min; Yang, Peng-Yuan
2017-09-05
The precise and large-scale identification of intact glycopeptides is a critical step in glycoproteomics. Owing to the complexity of glycosylation, the current overall throughput, data quality and accessibility of intact glycopeptide identification lag behind those in routine proteomic analyses. Here, we propose a workflow for the precise high-throughput identification of intact N-glycopeptides at the proteome scale using stepped-energy fragmentation and a dedicated search engine. pGlyco 2.0 conducts comprehensive quality control including false discovery rate evaluation at all three levels of matches to glycans, peptides and glycopeptides, improving the current level of accuracy of intact glycopeptide identification. The N-glycoproteome of samples metabolically labeled with 15N/13C was analyzed quantitatively and utilized to validate the glycopeptide identification, which could be used as a novel benchmark pipeline to compare different search engines. Finally, we report a large-scale glycoproteome dataset consisting of 10,009 distinct site-specific N-glycans on 1988 glycosylation sites from 955 glycoproteins in five mouse tissues. Protein glycosylation is a heterogeneous post-translational modification that generates greater proteomic diversity that is difficult to analyze. Here the authors describe pGlyco 2.0, a workflow for the precise one-step identification of intact N-glycopeptides at the proteome scale.
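Target-decoy false discovery rate estimation, which pGlyco 2.0 applies at the glycan, peptide and glycopeptide levels, can be illustrated with a generic sketch. The scoring and decoy scheme below are deliberately simplified and do not reproduce pGlyco's actual FDR model.

    def estimate_fdr(matches):
        """Cumulative target-decoy FDR for a list of (score, is_decoy) matches."""
        matches = sorted(matches, key=lambda m: m[0], reverse=True)
        decoys = targets = 0
        fdr_at_rank = []
        for score, is_decoy in matches:
            if is_decoy:
                decoys += 1
            else:
                targets += 1
            fdr_at_rank.append(round(decoys / max(targets, 1), 3))
        return fdr_at_rank

    # Toy example: four matches sorted by score, one of them a decoy hit.
    print(estimate_fdr([(9.1, False), (8.7, False), (7.2, True), (6.5, False)]))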
Process improvement of pap smear tracking in a women's medicine center clinic in residency training.
Calhoun, Byron C; Goode, Jeff; Simmons, Kathy
2011-11-01
Application of Six-Sigma methodology and Change Acceleration Process (CAP)/Work Out (WO) tools to track pap smear results in an outpatient clinic in a hospital-based residency-training program. This was an observational study of the impact of changes obtained through application of Six-Sigma principles to the clinic process, with particular attention to the prevention of sentinel events. Using cohort analysis and applying Six-Sigma principles to an interactive electronic medical record (Soarian) workflow engine, we designed a system for timely accession and reporting of pap smear and pathology results. We compared manual processes from January 1, 2007 to February 28, 2008 to automated processes from March 1, 2008 to December 31, 2009. Using Six-Sigma principles and CAP/WO tools, including "voice of the customer" and a team-focused approach, no outlier events went untracked. Applying the Soarian workflow engine to track the prescribed 7-day turnaround time for completion, we identified 148 pap results out of 3,936, 3 non-gynecological results out of 15, and 41 surgical results out of 246. We applied Six-Sigma principles to an outpatient clinic, facilitating an interdisciplinary team approach to improve the clinic's reporting system. Through focused problem assessment, verification of process, and validation of outcomes, we improved patient care for pap smears and critical pathology.
Design of a decision-support architecture for management of remotely monitored patients.
Basilakis, Jim; Lovell, Nigel H; Redmond, Stephen J; Celler, Branko G
2010-09-01
Telehealth is the provision of health services at a distance. Typically, this occurs in unsupervised or remote environments, such as a patient's home. We describe one such telehealth system and the integration of extracted clinical measurement parameters with a decision-support system (DSS). An enterprise application-server framework, combined with a rules engine and statistical analysis tools, is used to analyze the acquired telehealth data, searching for trends and shifts in parameter values, as well as identifying individual measurements that exceed predetermined or adaptive thresholds. An overarching business process engine is used to manage the core DSS knowledge base and coordinate workflow outputs of the DSS. The primary role for such a DSS is to provide an effective means to reduce the data overload and to provide a means of health risk stratification to allow appropriate targeting of clinical resources to best manage the health of the patient. In this way, the system may ultimately influence changes in workflow by targeting scarce clinical resources to patients of most need. A single case study extracted from an initial pilot trial of the system, in patients with chronic obstructive pulmonary disease and chronic heart failure, will be reviewed to illustrate the potential benefit of integrating telehealth and decision support in the management of both acute and chronic disease.
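The two kinds of checks mentioned, thresholds on individual measurements and detection of shifts in recent values, can be sketched in a few lines. The window size, the z-score test and the example readings are illustrative assumptions, not the rules used in the described DSS.

    from statistics import mean, stdev

    def check_measurement(value, lower, upper):
        """Flag a single measurement outside a predetermined threshold band."""
        return None if lower <= value <= upper else "out of range: " + str(value)

    def check_shift(history, window=5, z=2.0):
        """Flag a shift when the recent window drifts away from the baseline."""
        if len(history) < 2 * window:
            return None
        baseline, recent = history[:-window], history[-window:]
        mu, sigma = mean(baseline), stdev(baseline) or 1e-9
        if abs(mean(recent) - mu) > z * sigma:
            return "shift detected: baseline %.1f -> recent %.1f" % (mu, mean(recent))
        return None

    spo2 = [96, 97, 96, 95, 96, 97, 96, 92, 91, 90, 91, 90]  # made-up readings
    print(check_measurement(spo2[-1], lower=94, upper=100))
    print(check_shift(spo2))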
Technology-driven dietary assessment: a software developer’s perspective
Buday, Richard; Tapia, Ramsey; Maze, Gary R.
2015-01-01
Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224
UK Health and Social Care Case Studies: Iterative Technology Development.
Blanchard, Adie; Gilbert, Laura; Dawson, Tom
2017-01-01
As a result of increasing demand in the face of reducing resources, technology has been implemented in many social and health care services to improve service efficiency. This paper outlines the experiences of deploying a 'Software as a Service' application in the UK social and health care sectors. The case studies demonstrate that every implementation is different, and unique to each organisation. Technology design and integration can be facilitated by ongoing engagement and collaboration with all stakeholders, flexible design, and attention to interoperability to suit services and their workflows.
SECIMTools: a suite of metabolomics data analysis tools.
Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M
2018-04-20
Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high throughput technology for untargeted analysis of metabolites. Open access, easy-to-use analytic tools that are broadly accessible to the biological community need to be developed. While the technology used in metabolomics varies, most metabolomics studies have a set of identified features. Galaxy is an open access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modular modularity clustering), basic statistical analysis methods (partial least squares discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (the least absolute shrinkage and selection operator (LASSO) and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for easy use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.
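As an example of one quality-control building block of this kind, the standalone sketch below runs a principal component analysis on a toy feature-by-sample matrix. It uses scikit-learn and synthetic data for illustration and is not the SECIMTools implementation.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Toy metabolomics matrix: rows are samples, columns are metabolite features.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(12, 200))
    X[6:] += 1.5  # pretend the last six samples come from a second group or batch

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for i, (pc1, pc2) in enumerate(scores):
        group = "A" if i < 6 else "B"
        print("sample %02d group %s  PC1=%+.2f  PC2=%+.2f" % (i, group, pc1, pc2))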
Standardized protocols for quality control of MRM-based plasma proteomic workflows.
Percy, Andrew J; Chambers, Andrew G; Smith, Derek S; Borchers, Christoph H
2013-01-04
Mass spectrometry (MS)-based proteomics is rapidly emerging as a viable technology for the identification and quantitation of biological samples, such as human plasma--the most complex yet commonly employed biofluid in clinical analyses. The transition from a qualitative to quantitative science is required if proteomics is going to successfully make the transition to a clinically useful technique. MS, however, has been criticized for a lack of reproducibility and interlaboratory transferability. Currently, the MS and plasma proteomics communities lack standardized protocols and reagents to ensure that high-quality quantitative data can be accurately and precisely reproduced by laboratories across the world using different MS technologies. Toward addressing this issue, we have developed standard protocols for multiple reaction monitoring (MRM)-based assays with customized isotopically labeled internal standards for quality control of the sample preparation workflow and the MS platform in quantitative plasma proteomic analyses. The development of reference standards and their application to a single MS platform is discussed herein, along with the results from intralaboratory tests. The tests highlighted the importance of the reference standards in assessing the efficiency and reproducibility of the entire bottom-up proteomic workflow and revealed errors related to the sample preparation and performance quality and deficits of the MS and LC systems. Such evaluations are necessary if MRM-based quantitative plasma proteomics is to be used in verifying and validating putative disease biomarkers across different research laboratories and eventually in clinical laboratories.
Designing Health Information Technology Tools to Prevent Gaps in Public Health Insurance.
Hall, Jennifer D; Harding, Rose L; DeVoe, Jennifer E; Gold, Rachel; Angier, Heather; Sumic, Aleksandra; Nelson, Christine A; Likumahuwa-Ackman, Sonja; Cohen, Deborah J
2017-06-23
Changes in health insurance policies have increased coverage opportunities, but enrollees are required to annually reapply for benefits which, if not managed appropriately, can lead to insurance gaps. Electronic health records (EHRs) can automate processes for assisting patients with health insurance enrollment and re-enrollment. We describe community health centers' (CHC) workflow, documentation, and tracking needs for assisting families with insurance application processes, and the health information technology (IT) tool components that were developed to meet those needs. We conducted a qualitative study using semi-structured interviews and observation of clinic operations and insurance application assistance processes. Data were analyzed using a grounded theory approach. We diagrammed workflows and shared information with a team of developers who built the EHR-based tools. Four steps to the insurance assistance workflow were common among CHCs: 1) Identifying patients for public health insurance application assistance; 2) Completing and submitting the public health insurance application when clinic staff met with patients to collect requisite information and helped them apply for benefits; 3) Tracking public health insurance approval to monitor for decisions; and 4) Assisting with annual health insurance reapplication. We developed EHR-based tools to support clinical staff with each of these steps. CHCs are uniquely positioned to help patients and families with public health insurance applications. CHCs have invested in staff to assist patients with insurance applications and help prevent coverage gaps. To best assist patients and to foster efficiency, EHR-based insurance tools need comprehensive, timely, and accurate health insurance information.
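The four workflow steps can be mirrored in a small status-tracking record of the kind such tools maintain. The field names and status values below are illustrative only and do not describe the actual EHR tool design.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum
    from typing import List, Optional

    class Status(Enum):
        IDENTIFIED = "identified for assistance"   # step 1
        SUBMITTED = "application submitted"        # step 2
        APPROVED = "approved"                      # step 3 outcome
        DENIED = "denied"                          # step 3 outcome
        REAPPLY_DUE = "annual reapplication due"   # step 4

    @dataclass
    class InsuranceCase:
        patient_id: str
        status: Status = Status.IDENTIFIED
        submitted_on: Optional[date] = None
        renewal_due: Optional[date] = None
        notes: List[str] = field(default_factory=list)

    case = InsuranceCase("pt-001")
    case.status, case.submitted_on = Status.SUBMITTED, date(2017, 3, 1)
    print(case)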
Jensen, Roxanne E; Rothrock, Nan E; DeWitt, Esi M; Spiegel, Brennan; Tucker, Carole A; Crane, Heidi M; Forrest, Christopher B; Patrick, Donald L; Fredericksen, Rob; Shulman, Lisa M; Cella, David; Crane, Paul K
2015-02-01
Patient-reported outcomes (PROs) are gaining recognition as key measures for improving the quality of patient care in clinical care settings. Three factors have made the implementation of PROs in clinical care more feasible: increased use of modern measurement methods in PRO design and validation, rapid progression of technology (eg, touchscreen tablets, Internet accessibility, and electronic health records), and greater demand for measurement and monitoring of PROs by regulators, payers, accreditors, and professional organizations. As electronic PRO collection and reporting capabilities have improved, the challenges of collecting PRO data have changed. To update information on PRO adoption considerations in clinical care, highlighting electronic and technical advances with respect to measure selection, clinical workflow, data infrastructure, and outcomes reporting. Five practical case studies across diverse health care settings and patient populations are used to explore how implementation barriers were addressed to promote the successful integration of PRO collection into the clinical workflow. The case studies address selecting and reporting of relevant content, workflow integration, previsit screening, effective evaluation, and electronic health record integration. These case studies exemplify elements of well-designed electronic systems, including response automation, tailoring of item selection and reporting algorithms, flexibility of collection location, and integration with patient health care data elements. They also highlight emerging logistical barriers in this area, such as the need for specialized technological and methodological expertise, and design limitations of current electronic data capture systems.
Jensen, Roxanne E.; Rothrock, Nan E.; DeWitt, Esi Morgan; Spiegel, Brennan; Tucker, Carole A.; Crane, Heidi M.; Forrest, Christopher B.; Patrick, Donald L.; Fredericksen, Rob; Shulman, Lisa M.; Cella, David; Crane, Paul K.
2016-01-01
Background Patient-reported outcomes (PROs) are gaining recognition as key measures for improving the quality of patient care in clinical care settings. Three factors have made the implementation of PROs in clinical care more feasible: increased use of modern measurement methods in PRO design and validation, rapid progression of technology (e.g., touch screen tablets, Internet accessibility, and electronic health records (EHRs)), and greater demand for measurement and monitoring of PROs by regulators, payers, accreditors, and professional organizations. As electronic PRO collection and reporting capabilities have improved, the challenges of collecting PRO data have changed. Objectives To update information on PRO adoption considerations in clinical care, highlighting electronic and technical advances with respect to measure selection, clinical workflow, data infrastructure, and outcomes reporting. Methods Five practical case studies across diverse healthcare settings and patient populations are used to explore how implementation barriers were addressed to promote the successful integration of PRO collection into the clinical workflow. The case studies address selecting and reporting of relevant content, workflow integration, pre-visit screening, effective evaluation, and EHR integration. Conclusions These case studies exemplify elements of well-designed electronic systems, including response automation, tailoring of item selection and reporting algorithms, flexibility of collection location, and integration with patient health care data elements. They also highlight emerging logistical barriers in this area, such as the need for specialized technological and methodological expertise, and design limitations of current electronic data capture systems. PMID:25588135
Virtual Geophysics Laboratory: Exploiting the Cloud and Empowering Geophysicsts
NASA Astrophysics Data System (ADS)
Fraser, Ryan; Vote, Josh; Goh, Richard; Cox, Simon
2013-04-01
Over the last five decades geoscientists from Australian state and federal agencies have collected and assembled around 3 Petabytes of geoscience data sets under public funding. As a consequence of technological progress, data is now being acquired at exponential rates and in higher resolution than ever before. Effective use of these big data sets challenges the storage and computational infrastructure of most organizations. The Virtual Geophysics Laboratory (VGL) is a scientific workflow portal that addresses some of the resulting issues by providing Australian geophysicists with access to a Web 2.0 or Rich Internet Application (RIA) based integrated environment that exploits eResearch tools and Cloud computing technology, and promotes collaboration within the user community. VGL simplifies and automates large portions of what were previously manually intensive scientific workflow processes, allowing scientists to focus on the natural science problems, rather than computer science and IT. A number of geophysical processing codes are incorporated to support multiple workflows. For example, a gravity inversion can be performed by combining the Escript/Finley codes (from the University of Queensland) with the gravity data registered in VGL. Likewise, tectonic processes can also be modeled by combining the Underworld code (from Monash University) with one of the various 3D models available to VGL. Cloud services provide scalable and cost effective compute resources. VGL is built on top of mature standards-compliant information services, many deployed using the Spatial Information Services Stack (SISS), which provides direct access to geophysical data. A large number of data sets from Geoscience Australia assist users in data discovery. GeoNetwork provides a metadata catalog to store workflow results for future use, discovery and provenance tracking. VGL has been developed in collaboration with the research community using incremental software development practices and open source tools. While developed to provide the geophysics research community with a sustainable platform and scalable infrastructure, VGL has also produced a number of concepts, patterns and generic components that have been reused for cases beyond geophysics, including natural hazards, satellite processing and other areas requiring spatial data discovery and processing. Future plans for VGL include a number of improvements in both functional and non-functional areas in response to its user community's needs and advances in information technologies. In particular, research is underway in the following areas: (a) distributed and parallel workflow processing in the cloud, (b) seamless integration with various cloud providers, and (c) integration with virtual laboratories representing other science domains. Acknowledgements: VGL was developed by CSIRO in collaboration with Geoscience Australia, National Computational Infrastructure, Australia National University, Monash University and University of Queensland, and has been supported by the Australian Government's Education Investment Funds through NeCTAR.
Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows
NASA Astrophysics Data System (ADS)
Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.
2014-12-01
The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which includes toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is in the automated job launching and monitoring capabilities, which allow a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to the users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
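The automated job launching and monitoring described above reduces, at its simplest, to submitting a run to a batch scheduler and polling its state. The sketch assumes a SLURM-style sbatch/squeue interface and a hypothetical launch script name; the schedulers Akuna actually targets may differ.

    import subprocess
    import time

    def submit(script_path):
        """Submit a batch script and return the scheduler's job id."""
        out = subprocess.run(["sbatch", "--parsable", script_path],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def wait_for(job_id, poll_seconds=30):
        """Poll the queue until the job no longer appears (finished or failed)."""
        while True:
            q = subprocess.run(["squeue", "-h", "-j", job_id],
                               capture_output=True, text=True)
            if not q.stdout.strip():
                return
            time.sleep(poll_seconds)

    # job_id = submit("amanzi_run.sh")   # hypothetical launch script
    # wait_for(job_id)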
NG6: Integrated next generation sequencing storage and processing environment.
Mariette, Jérôme; Escudié, Frédéric; Allias, Nicolas; Salin, Gérald; Noirot, Céline; Thomas, Sylvain; Klopp, Christophe
2012-09-09
Next generation sequencing platforms are now well established in sequencing centres and some laboratories. Upcoming smaller scale machines such as the 454 junior from Roche or the MiSeq from Illumina will increase the number of laboratories hosting a sequencer. In such a context, it is important to provide these teams with an easily manageable environment to store and process the produced reads. We describe a user-friendly information system able to manage large sets of sequencing data. It includes, on the one hand, a workflow environment already containing pipelines adapted to different input formats (sff, fasta, fastq and qseq), different sequencers (Roche 454, Illumina HiSeq) and various analyses (quality control, assembly, alignment, diversity studies,…) and, on the other hand, a secured web site giving access to the results. The connected user will be able to download raw and processed data and browse through the analysis result statistics. The provided workflows can easily be modified or extended and new ones can be added. Ergatis is used as a workflow building, running and monitoring system. The analyses can be run locally or in a cluster environment using Sun Grid Engine. NG6 is a complete information system designed to answer the needs of a sequencing platform. It provides a user-friendly interface to process, store and download high-throughput sequencing data.
Initial steps towards a production platform for DNA sequence analysis on the grid.
Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D
2010-12-14
Bioinformatics is confronted with a new data explosion due to the availability of high throughput DNA sequencers. Data storage and analysis become a problem on local servers, and therefore a switch to other IT infrastructures is needed. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available for the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next generation sequencers. We currently adopt this methodology on a daily basis for DNA sequencing and other applications. More information and source code are available via http://www.bioinformaticslaboratory.nl/
Routine Digital Pathology Workflow: The Catania Experience
Fraggetta, Filippo; Garozzo, Salvatore; Zannoni, Gian Franco; Pantanowitz, Liron; Rossi, Esther Diana
2017-01-01
Introduction: Successful implementation of whole slide imaging (WSI) for routine clinical practice has been accomplished in only a few pathology laboratories worldwide. We report the transition to an effective and complete digital surgical pathology workflow in the pathology laboratory at Cannizzaro Hospital in Catania, Italy. Methods: All (100%) permanent histopathology glass slides were digitized at ×20 using Aperio AT2 scanners. Compatible stain and scanning slide racks were employed to streamline operations. eSlide Manager software was bidirectionally interfaced with the anatomic pathology laboratory information system. Virtual slide trays connected to the two-dimensional (2D) barcode tracking system allowed pathologists to confirm that they were correctly assigned slides and that all tissues on these glass slides were scanned. Results: Over 115,000 glass slides were digitized with a scan fail rate of around 1%. Drying glass slides before scanning minimized their sticking to scanner racks. Implementation required introduction of a 2D barcode tracking system and modification of histology workflow processes. Conclusion: Our experience indicates that effective adoption of WSI for primary diagnostic use was more dependent on optimizing preimaging variables and integration with the laboratory information system than on information technology infrastructure and ensuring pathologist buy-in. Implementation of digital pathology for routine practice not only leveraged the benefits of digital imaging but also created an opportunity for establishing standardization of workflow processes in the pathology laboratory. PMID:29416914
Routine Digital Pathology Workflow: The Catania Experience.
Fraggetta, Filippo; Garozzo, Salvatore; Zannoni, Gian Franco; Pantanowitz, Liron; Rossi, Esther Diana
2017-01-01
Successful implementation of whole slide imaging (WSI) for routine clinical practice has been accomplished in only a few pathology laboratories worldwide. We report the transition to an effective and complete digital surgical pathology workflow in the pathology laboratory at Cannizzaro Hospital in Catania, Italy. All (100%) permanent histopathology glass slides were digitized at ×20 using Aperio AT2 scanners. Compatible stain and scanning slide racks were employed to streamline operations. eSlide Manager software was bidirectionally interfaced with the anatomic pathology laboratory information system. Virtual slide trays connected to the two-dimensional (2D) barcode tracking system allowed pathologists to confirm that they were correctly assigned slides and that all tissues on these glass slides were scanned. Over 115,000 glass slides were digitized with a scan fail rate of around 1%. Drying glass slides before scanning minimized their sticking to scanner racks. Implementation required introduction of a 2D barcode tracking system and modification of histology workflow processes. Our experience indicates that effective adoption of WSI for primary diagnostic use was more dependent on optimizing preimaging variables and integration with the laboratory information system than on information technology infrastructure and ensuring pathologist buy-in. Implementation of digital pathology for routine practice not only leveraged the benefits of digital imaging but also created an opportunity for establishing standardization of workflow processes in the pathology laboratory.
Agile parallel bioinformatics workflow management using Pwrake.
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
2011-09-08
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error.Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.
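Pwrake workflows themselves are written as Ruby rakefiles; as a language-neutral illustration of the two-phase development style the authors describe, the sketch below (Python, with entirely hypothetical task and parameter names) keeps the tunable parameters separate from the task-graph definition so that each can be iterated on independently.
```python
# Illustrative sketch only (not Pwrake's Ruby/Rake DSL): separating the
# workflow *definition* phase from the *parameter adjustment* phase.
# Task names and parameters are hypothetical.

# -- parameter section: edited repeatedly during the adjustment phase --
PARAMS = {
    "min_base_quality": 20,
    "threads": 4,
}

# -- workflow definition: declares tasks and the files that link them --
def align(sample_fastq, reference, out_bam, params):
    # stand-in for invoking an aligner; kept abstract on purpose
    print(f"align {sample_fastq} -> {out_bam} (threads={params['threads']})")

def call_variants(in_bam, out_vcf, params):
    print(f"call variants {in_bam} -> {out_vcf} "
          f"(min_base_quality={params['min_base_quality']})")

def run_workflow(sample_fastq, reference):
    bam, vcf = "sample.bam", "sample.vcf"
    align(sample_fastq, reference, bam, PARAMS)
    call_variants(bam, vcf, PARAMS)

if __name__ == "__main__":
    run_workflow("sample.fastq", "ref.fa")
```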
2009-12-01
Supporting technologies include Business Process Modeling Notation (BPMN), Unified Modeling Language (UML), Service-oriented Architecture (SoA), and model-driven architecture.
Automation and workflow considerations for embedding Digimarc Barcodes at scale
NASA Astrophysics Data System (ADS)
Rodriguez, Tony; Haaga, Don; Calhoon, Sean
2015-03-01
The Digimarc® Barcode is a digital watermark applied to packages and variable data labels that carries GS1 standard GTIN-14 data traditionally carried by a 1-D barcode. The Digimarc Barcode can be read with smartphones and imaging-based barcode readers commonly used in grocery and retail environments. Using smartphones, consumers can engage with products and retailers can materially increase the speed of check-out, increasing store margins and providing a better experience for shoppers. Internal testing has shown an average 53% increase in scanning throughput, enabling hundreds of millions of dollars in cost savings [1] for retailers when deployed at scale. To get to scale, the process of embedding a digital watermark must be automated and integrated within existing workflows. Creating the tools and processes to do so represents a new challenge for the watermarking community. This paper presents a description and an analysis of the workflow implemented by Digimarc to deploy the Digimarc Barcode at scale. An overview of the tools created and lessons learned during the introduction of the technology to the market is provided.
XML schemas for common bioinformatic data types and their application in workflow systems
Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert
2006-01-01
Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at , the BioDOM library can be obtained at . Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823
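As a minimal illustration of the schema-validation step that such interoperation relies on (this is not the BioDOM Java API; the schema and document file names are hypothetical), a workflow tool could check its input against an XML Schema before processing:
```python
# Minimal sketch: validating a data file against an XML Schema before
# handing it to the next tool in a workflow. File names are hypothetical.
from lxml import etree

schema = etree.XMLSchema(etree.parse("sequence.xsd"))
document = etree.parse("input_sequence.xml")

if schema.validate(document):
    print("document conforms to the schema; safe to pass downstream")
else:
    # schema.error_log lists each violation with line numbers
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```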
Summary of the white paper of DICOM WG24 'DICOM in Surgery'
NASA Astrophysics Data System (ADS)
Lemke, Heinz U.
2007-03-01
Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient Operating Room (OR). The DICOM Working Group 24 (WG24) has been established to develop DICOM objects and services related to Image Guided Surgery (IGS). To determine these standards, it is important to define day-to-day, step-by-step surgical workflow practices and create surgery workflow models per procedures or per variable cases. A well-defined workflow and a high-fidelity patient model will be the base of activities for both radiation therapy and surgery. Considering the present and future requirements for surgical planning and intervention, such a patient model must be n-dimensional, where n may include the spatial and temporal dimensions as well as a number of functional variables. As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG24 should, therefore, also be to serve the therapeutic disciplines by enabling modelling technology to be based on standards.
Pegasus Workflow Management System: Helping Applications From Earth and Space
NASA Astrophysics Data System (ADS)
Mehta, G.; Deelman, E.; Vahi, K.; Silva, F.
2010-12-01
Pegasus WMS is a Workflow Management System that can manage large-scale scientific workflows across Grid, local and Cloud resources simultaneously. Pegasus WMS provides a means for representing the workflow of an application in an abstract XML form, agnostic of the resources available to run it and the location of data and executables. It then compiles these workflows into concrete plans by querying catalogs and farming computations across local and distributed computing resources, as well as emerging commercial and community cloud environments in an easy and reliable manner. Pegasus WMS optimizes the execution as well as data movement by leveraging existing Grid and cloud technologies via a flexible pluggable interface and provides advanced features like reusing existing data, automatic cleanup of generated data, and recursive workflows with deferred planning. It also captures all the provenance of the workflow from the planning stage to the execution of the generated data, helping scientists to accurately measure performance metrics of their workflow as well as data reproducibility issues. Pegasus WMS was initially developed as part of the GriPhyN project to support large-scale high-energy physics and astrophysics experiments. Direct funding from the NSF enabled support for a wide variety of applications from diverse domains including earthquake simulation, bacterial RNA studies, helioseismology and ocean modeling. Earthquake Simulation: Pegasus WMS was recently used in a large scale production run in 2009 by the Southern California Earthquake Centre to run 192 million loosely coupled tasks and about 2000 tightly coupled MPI style tasks on National Cyber infrastructure for generating a probabilistic seismic hazard map of the Southern California region. SCEC ran 223 workflows over a period of eight weeks, using on average 4,420 cores, with a peak of 14,540 cores. A total of 192 million files were produced totaling about 165TB out of which 11TB of data was saved. Astrophysics: The Laser Interferometer Gravitational-Wave Observatory (LIGO) uses Pegasus WMS to search for binary inspiral gravitational waves. A month of LIGO data requires many thousands of jobs, running for days on hundreds of CPUs on the LIGO Data Grid (LDG) and Open Science Grid (OSG). Ocean Temperature Forecast: Researchers at the Jet Propulsion Laboratory are exploring Pegasus WMS to run ocean forecast ensembles of the California coastal region. These models produce a number of daily forecasts for water temperature, salinity, and other measures. Helioseismology: The Solar Dynamics Observatory (SDO) is NASA's most important solar physics mission of this coming decade. Pegasus WMS is being used to analyze the data from SDO, which will be predominantly used to learn about solar magnetic activity and to probe the internal structure and dynamics of the Sun with helioseismology. Bacterial RNA studies: SIPHT is an application in bacterial genomics, which predicts sRNA (small non-coding RNAs)-encoding genes in bacteria. This project currently provides a web-based interface using Pegasus WMS at the backend to facilitate large-scale execution of the workflows on varied resources and provide better notifications of task/workflow completion.
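As a rough sketch of the abstract-workflow idea (not the Pegasus API; the task names are invented), a workflow can be captured as a directed acyclic graph of tasks and executed in dependency order once each abstract task has been mapped to a concrete job:
```python
# Minimal illustration: an abstract workflow as a DAG of tasks with
# dependencies, executed in topological order once the abstract plan is
# "compiled" to concrete commands. Task names are made up.
from graphlib import TopologicalSorter  # Python 3.9+

# task -> set of tasks it depends on
dag = {
    "fetch_inputs": set(),
    "reproject":    {"fetch_inputs"},
    "background":   {"reproject"},
    "coadd":        {"background"},
    "register_out": {"coadd"},
}

def run(task):
    print(f"running {task}")  # stand-in for submitting a concrete job

for task in TopologicalSorter(dag).static_order():
    run(task)
```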
Semantics-enabled service discovery framework in the SIMDAT pharma grid.
Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert
2008-03-01
We present the design and implementation of a semantics-enabled service discovery framework in the data Grids for process and product development using numerical simulation and knowledge discovery (SIMDAT) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: the Web ontology language (OWL)-description logic (DL)-based biological domain ontology, OWL Web service ontology (OWL-S)-based service annotation, and semantic matchmaker based on the ontology reasoning. Built upon the framework, workflow technologies are extensively exploited in the SIMDAT to assist biologists in (semi)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
Que, Syril Keena T; Grant-Kels, Jane M; Longo, Caterina; Pellacani, Giovanni
2016-10-01
The use of reflectance confocal microscopy (RCM) and other noninvasive imaging devices can potentially streamline clinical care, leading to more precise and efficient management of skin cancer. This article explores the potential role of RCM in cutaneous oncology, as an adjunct to more established techniques of detecting and monitoring for skin cancer, such as dermoscopy and total body photography. Discussed are current barriers to the adoption of RCM, diagnostic workflows and standards of care in the United States and Europe, and medicolegal issues. The potential role of RCM and other similar technological innovations in the enhancement of dermatologic care is evaluated. Copyright © 2016 Elsevier Inc. All rights reserved.
Digital devices: big challenge in color management
NASA Astrophysics Data System (ADS)
Vauderwange, Oliver; Curticapean, Dan; Dreßler, Paul; Wozniak, Peter
2014-09-01
The paper will present how the students learn to find technical solutions in color management by using adequate digital devices and recognize the specific upcoming tasks in this area. Several issues, problems and their solutions will be discussed. The scientific background offers specific didactic solutions in this area of optics. Color management is the central topic of this paper. Color management is a crucial responsibility for media engineers and designers. Print, screen and mobile applications must independently display the same colors. Predictability and consistency in the color representation are the aims of a color management system. This is only possible in a standardized and audited production workflow. Digital media nowadays develop at a fast pace. An increasing number of different digital devices with different display sizes and display technologies poses a great challenge for every color management system. The authors will present their experience in the field of color management. The focus is on the design and development of a suitable learning environment with the required infrastructure. The combination of theoretical and practical lectures creates a deeper understanding of digital color representation.
Embedded Multimaterial Extrusion Bioprinting.
Rocca, Marco; Fragasso, Alessio; Liu, Wanjun; Heinrich, Marcel A; Zhang, Yu Shrike
2018-04-01
Embedded extrusion bioprinting allows for the generation of complex structures that otherwise cannot be achieved with conventional layer-by-layer deposition from the bottom, by overcoming the limits imposed by gravitational force. By taking advantage of a hydrogel bath, serving as a sacrificial printing environment, it is feasible to extrude a bioink in freeform until the entire structure is deposited and crosslinked. The bioprinted structure can be subsequently released from the supporting hydrogel and used for further applications. Combining this advanced three-dimensional (3D) bioprinting technique with a multimaterial extrusion printhead setup enables the fabrication of complex volumetric structures built from multiple bioinks. The work described in this paper focuses on the optimization of the experimental setup and proposes a workflow to automate the bioprinting process, resulting in a fast and efficient conversion of a virtual 3D model into a physical, extruded structure in freeform using the multimaterial embedded bioprinting system. It is anticipated that further development of this technology will likely lead to widespread applications in areas such as tissue engineering, pharmaceutical testing, and organs-on-chips.
3D Perception Technologies for Surgical Operating Theatres.
Beyl, T; Schreiter, L; Nicolai, P; Raczkowsky, J; Wörn, H
2016-01-01
3D Perception technologies have been explored in various fields. This paper explores the application of such technologies for surgical operating theatres. Clinical applications can be found in workflow detection, tracking and analysis, collision avoidance with medical robots, perception of interaction between participants of the operation, training of the operation room crew, patient calibration and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the time-of-flight (ToF) technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is placed on tracking the personnel and on evaluating the system's performance and accuracy.
Lesselroth, Blake J; Adams, Kathleen; Tallett, Stephanie; Wood, Scott D; Keeling, Amy; Cheng, Karen; Church, Victoria L; Felder, Robert; Tran, Hanna
2013-01-01
Our objectives were to (1) develop an in-depth understanding of the workflow and information flow in medication reconciliation, and (2) design medication reconciliation support technology using a combination of rapid-cycle prototyping and human-centered design. Although medication reconciliation is a national patient safety goal, limitations both of physical environment and in workflow can make it challenging to implement durable systems. We used several human factors techniques to gather requirements and develop a new process to collect a medication history at hospital admission. We completed an ethnography and time and motion analysis of pharmacists in order to illustrate the processes used to reconcile medications. We then used the requirements to design prototype multimedia software for collecting a bedside medication history. We observed how pharmacists incorporated the technology into their physical environment and documented usability issues. Admissions occurred in three phases: (1) list compilation, (2) order processing, and (3) team coordination. Current medication reconciliation processes at the hospital average 19 minutes to complete and do not include a bedside interview. Use of our technology during a bedside interview required an average of 29 minutes. The software represents a viable proof-of-concept to automate parts of history collection and enhance patient communication. However, we discovered several usability issues that require attention. We designed a patient-centered technology to enhance how clinicians collect a patient's medication history. By using multiple human factors methods, our research team identified system themes and design constraints that influence the quality of the medication reconciliation process and implementation effectiveness of new technology. Evidence-based design, human factors, patient-centered care, safety, technology.
ballaxy: web services for structural bioinformatics.
Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas
2015-01-01
Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools and an integration into the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be integrated in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Also, docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Rynge, M.; Juve, G.; Kinney, J.; Good, J.; Berriman, B.; Merrihew, A.; Deelman, E.
2014-05-01
In this paper, we describe how to leverage cloud resources to generate large-scale mosaics of the galactic plane in multiple wavelengths. Our goal is to generate a 16-wavelength infrared Atlas of the Galactic Plane at a common spatial sampling of 1 arcsec, processed so that they appear to have been measured with a single instrument. This will be achieved by using the Montage image mosaic engine to process observations from the 2MASS, GLIMPSE, MIPSGAL, MSX and WISE datasets, over a wavelength range of 1 μm to 24 μm, and by using the Pegasus Workflow Management System for managing the workload. When complete, the Atlas will be made available to the community as a data product. We are generating images that cover ±180° in Galactic longitude and ±20° in Galactic latitude, to the extent permitted by the spatial coverage of each dataset. Each image will be 5°x5° in size (including an overlap of 1° with neighboring tiles), resulting in an atlas of 1,001 images. The final size will be about 50 TBs. This paper will focus on the computational challenges, solutions, and lessons learned in producing the Atlas. To manage the computation we are using the Pegasus Workflow Management System, a mature, highly fault-tolerant system now in release 4.2.2 that has found wide applicability across many science disciplines. A scientific workflow describes the dependencies between the tasks and in most cases the workflow is described as a directed acyclic graph, where the nodes are tasks and the edges denote the task dependencies. A defining property of a scientific workflow is that it manages data flow between tasks. Applied to the galactic plane project, each 5°x5° mosaic is a Pegasus workflow. Pegasus is used to fetch the source images, execute the image mosaicking steps of Montage, and store the final outputs in a storage system. As these workflows are very I/O intensive, care has to be taken when choosing what infrastructure to execute the workflow on. In our setup, we chose to use dynamically provisioned compute clusters running on the Amazon Elastic Compute Cloud (EC2). All our instances use the same base image, which is configured to come up as a master node by default. The master node is a central instance from which the workflow can be managed. Additional worker instances are provisioned and configured to accept work assignments from the master node. The system allows for adding and removing workers in an ad hoc fashion, and can be run in large configurations. To date we have performed 245,000 CPU hours of computing and generated 7,029 images totaling 30 TB. With the current setup our runtime would be 340,000 CPU hours for the whole project. Using spot m2.4xlarge instances, the cost would be approximately $5,950. Using faster AWS instances, such as cc2.8xlarge, could potentially decrease the total CPU hours and further reduce the compute costs. The paper will explore these tradeoffs.
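A back-of-the-envelope check of the quoted cost figures, assuming an m2.4xlarge instance exposes 8 vCPUs (an assumption not stated in the abstract), is sketched below:
```python
# Rough consistency check of the quoted figures; the 8-vCPU count for
# m2.4xlarge is an assumption, not taken from the abstract.
cpu_hours_total = 340_000
vcpus_per_instance = 8
quoted_cost_usd = 5_950

instance_hours = cpu_hours_total / vcpus_per_instance        # 42,500
implied_spot_price = quoted_cost_usd / instance_hours        # ~$0.14 per instance-hour
cost_per_cpu_hour = quoted_cost_usd / cpu_hours_total        # ~$0.0175 per CPU-hour

print(f"instance-hours: {instance_hours:,.0f}")
print(f"implied spot price: ${implied_spot_price:.3f}/instance-hour")
print(f"cost per CPU-hour: ${cost_per_cpu_hour:.4f}")
```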
Neumann, Martin Horst Dieter; Schneck, Helen; Decker, Yvonne; Schömer, Susanne; Franken, André; Endris, Volker; Pfarr, Nicole; Weichert, Wilko; Niederacher, Dieter; Fehm, Tanja; Neubauer, Hans
2017-01-01
Circulating tumor cells (CTC) are rare cells which have left the primary tumor to enter the blood stream. Although only a small CTC subgroup is capable of extravasating, the presence of CTCs is associated with an increased risk of metastasis and a shorter overall survival. Understanding the heterogeneous CTC biology will optimize treatment decisions and will thereby improve patient outcome. For this, robust workflows for detection and isolation of CTCs are urgently required. Here, we present a workflow to characterize CTCs by combining the advantages of both the CellSearch® and the CellCelector™ micromanipulation system. CTCs were isolated from CellSearch® cartridges using the CellCelector™ system and were deposited into PCR tubes for subsequent molecular analysis (whole genome amplification (WGA) and massive parallel multigene sequencing). By a CellCelector™ screen we reidentified 97% of CellSearch® SKBR-3 cells. Furthermore, we isolated 97% of CellSearch®-proven patient CTCs using the CellCelector™ system. Therein, we found an almost perfect correlation of R² = 0.98 (Spearman's rho correlation, n = 20, p < 0.00001) between the CellSearch® CTC count (n = 271) and the CellCelector™ detected CTCs (n = 252). Isolated CTCs were analyzed by WGA and massive parallel multigene sequencing. In total, single nucleotide polymorphisms (SNPs) could be detected in 50 genes in seven CTCs, 12 MCF-7, and 3 T47D cells, respectively. Taken together, CTC quantification via the CellCelector™ system ensures a comprehensive detection of CTCs preidentified by the CellSearch® system. Moreover, the isolation of CTCs after CellSearch® using the CellCelector™ system guarantees CTC enrichment without any contaminants, enabling subsequent high throughput genomic analyses on the single cell level. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:125-132, 2017. © 2016 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Masek, J.; Rao, A.; Gao, F.; Davis, P.; Jackson, G.; Huang, C.; Weinstein, B.
2008-12-01
The Land Cover Change Community-based Processing and Analysis System (LC-ComPS) combines grid technology, existing science modules, and dynamic workflows to enable users to complete advanced land data processing on data available from local and distributed archives. Changes in land cover represent a direct link between human activities and the global environment, and in turn affect Earth's climate. Thus characterizing land cover change has become a major goal for Earth observation science. Many science algorithms exist to generate new products (e.g., surface reflectance, change detection) used to study land cover change. The overall objective of the LC-ComPS is to release a set of tools and services to the land science community that can be implemented as a flexible LC-ComPS to produce surface reflectance and land-cover change information with ground resolution on the order of Landsat-class instruments. This package includes software modules for pre-processing Landsat-type satellite imagery (calibration, atmospheric correction, orthorectification, precision registration, BRDF correction) for performing land-cover change analysis and includes pre-built workflow chains to automatically generate surface reflectance and land-cover change products based on user input. In order to meet the project objectives, the team created the infrastructure (i.e., client-server system with graphical and machine interfaces) to expand the use of these existing science algorithm capabilities in a community with distributed, large data archives and processing centers. Because of the distributed nature of the user community, grid technology was chosen to unite the dispersed community resources. At that time, grid computing was not used consistently and operationally within the Earth science research community. Therefore, there was a learning curve to configure and implement the underlying public key infrastructure (PKI) interfaces, required for user authentication, secure file transfer and remote job execution on the grid network of machines. In addition, science support was needed to vet that the grid technology did not have any adverse effects on the science module outputs. Other open source, unproven technologies, such as a workflow package to manage jobs submitted by the user, were infused into the overall system with successful results. This presentation will discuss the basic capabilities of LC-ComPS, explain how the technology was infused, provide lessons learned for using and integrating the various technologies while developing and operating the system, and finally outline plans moving forward (maintenance and operations decisions) based on the experience to date.
Using virtual environment for autonomous vehicle algorithm validation
NASA Astrophysics Data System (ADS)
Levinskis, Aleksandrs
2018-04-01
This paper describes the possible use of a modern game engine for validating and proving the concept of an algorithm design. A simple visual odometry algorithm is provided to illustrate the concept and to walk through all workflow stages. Some stages involve a Kalman filter used to estimate the optical flow velocity as well as the position of a moving camera located on the vehicle body. In particular, the Unreal Engine 4 game engine is used to generate optical flow patterns and the ground truth path. For optical flow determination, the Horn and Schunck method is applied. It is shown that this method can estimate the position of the camera attached to the vehicle with a certain displacement error with respect to ground truth, depending on the optical flow pattern. The displacement RMS error is calculated between the estimated and actual positions.
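As a sketch of the estimation idea only (the paper's exact filter design is not reproduced here; all matrices and noise values below are assumptions), a constant-velocity Kalman filter can fuse optical-flow velocity measurements into a position estimate along one axis:
```python
# Minimal sketch, assuming a constant-velocity model: the measurement is the
# velocity estimated from optical flow, and the filter tracks camera position
# along one axis. All matrices and noise values are illustrative assumptions.
import numpy as np

dt = 0.1                                    # frame interval [s]
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition for [pos, vel]
H = np.array([[0.0, 1.0]])                  # we only measure velocity (flow)
Q = np.diag([1e-4, 1e-2])                   # process noise
R = np.array([[0.05]])                      # measurement noise

x = np.zeros((2, 1))                        # initial state estimate
P = np.eye(2)                               # initial covariance

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the optical-flow velocity measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [1.0, 1.1, 0.9, 1.05]:             # synthetic flow velocities [m/s]
    x, P = kalman_step(x, P, np.array([[z]]))
print("estimated position, velocity:", x.ravel())
```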
Abstracted Workflow Framework with a Structure from Motion Application
NASA Astrophysics Data System (ADS)
Rossi, Adam J.
In scientific and engineering disciplines, from academia to industry, there is an increasing need for the development of custom software to perform experiments, construct systems, and develop products. The natural mindset initially is to shortcut and bypass all overhead and process rigor in order to obtain an immediate result for the problem at hand, with the misconception that the software will simply be thrown away at the end. In a majority of the cases, it turns out the software persists for many years, and likely ends up in production systems for which it was not initially intended. In the current study, a framework that can be used in both industry and academic applications mitigates underlying problems associated with developing scientific and engineering software. This results in software that is much more maintainable, documented, and usable by others, specifically allowing new users to extend capabilities of components already implemented in the framework. There is a multi-disciplinary need in the fields of imaging science, computer science, and software engineering for a unified implementation model, which motivates the development of an abstracted software framework. Structure from motion (SfM) has been identified as one use case where the abstracted workflow framework can improve research efficiencies and eliminate implementation redundancies in scientific fields. The SfM process begins by obtaining 2D images of a scene from different perspectives. Features from the images are extracted and correspondences are established. This provides a sufficient amount of information to initialize the problem for fully automated processing. Transformations are established between views, and 3D points are established via triangulation algorithms. The parameters for the camera models for all views / images are solved through bundle adjustment, establishing a highly consistent point cloud. The initial sparse point cloud and camera matrices are used to generate a dense point cloud through patch based techniques or densification algorithms such as Semi-Global Matching (SGM). The point cloud can be visualized or exploited by both humans and automated techniques. In some cases the point cloud is "draped" with original imagery in order to enhance the 3D model for a human viewer. The SfM workflow can be implemented in the abstracted framework, making it easily leverageable and extensible by multiple users. Like many processes in scientific and engineering domains, the workflow described for SfM is complex and requires many disparate components to form a functional system, often utilizing algorithms implemented by many users in different languages / environments and without knowledge of how the component fits into the larger system. In practice, this generally leads to issues interfacing the components, building the software for desired platforms, understanding its concept of operations, and how it can be manipulated in order to fit the desired function for a particular application. In addition, other scientists and engineers instinctively wish to analyze the performance of the system, establish new algorithms, optimize existing processes, and establish new functionality based on current research. This requires a framework whereby new components can be easily plugged in without affecting the current implemented functionality. The need for a universal programming environment establishes the motivation for the development of the abstracted workflow framework. 
This software implementation, named Catena, provides base classes from which new components must derive in order to operate within the framework. The derivation mandates requirements be satisfied in order to provide a complete implementation. Additionally, the developer must provide documentation of the component in terms of its overall function and inputs. The interface input and output values corresponding to the component must be defined in terms of their respective data types, and the implementation uses mechanisms within the framework to retrieve and send the values. This process requires the developer to componentize their algorithm rather than implement it monolithically. Although the requirements of the developer are slightly greater, the benefits realized from using Catena far outweigh the overhead, and results in extensible software. This thesis provides a basis for the abstracted workflow framework concept and the Catena software implementation. The benefits are also illustrated using a detailed examination of the SfM process as an example application.
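For reference, the bundle-adjustment step named in the SfM pipeline above minimizes the standard reprojection-error objective (textbook form, not specific to Catena), where x_ij is the observed image point of 3D point X_j in view i, π denotes perspective projection, and v_ij indicates whether point j is visible in view i:
```latex
% Textbook bundle-adjustment objective (not Catena-specific): jointly refine
% the camera matrices P_i and 3D points X_j by minimizing total reprojection
% error over all observations.
\min_{\{P_i\},\{X_j\}} \; \sum_{i} \sum_{j} v_{ij}\,
    \bigl\| \mathbf{x}_{ij} - \pi\!\left(P_i, \mathbf{X}_j\right) \bigr\|^{2}
```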
Mobile Technologies for the Surgical Pathologist.
Hartman, Douglas J
2015-06-01
Recent advances in hardware and computing power contained within mobile devices have made it possible to use these devices to improve and enhance pathologist workflow. This article discusses the possible uses ranging from basic functions to intermediate functions to advanced functions. Barriers to implementation are also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
NCI's Proteome Characterization Centers Announced | Office of Cancer Clinical Proteomics Research
The National Cancer Institute (NCI), part of the National Institutes of Health, announces the launch of a Clinical Proteomic Tumor Analysis Consortium (CPTAC). CPTAC is a comprehensive, coordinated team effort to accelerate the understanding of the molecular basis of cancer through the application of robust, quantitative, proteomic technologies and workflows.
ERIC Educational Resources Information Center
Ward, David; Hahn, Jim; Mestre, Lori S.
2015-01-01
To explore how libraries might integrate student perspectives and needs into their mobile development workflow, one large academic research library developed a fun, collaborative design methodology in order to stimulate student creativity. As part of a national IMLS (Institute for Museums and Library Services) grant, "The Student/Library…
The Artist and Architect: Creativity and Innovation through Role-Based Design
ERIC Educational Resources Information Center
Miller, Charles; Hokanson, Brad
2009-01-01
This article is the second installment in a four-part "Educational Technology" series exploring a contemporary perspective on the process of instructional design. In this article, the authors reintroduce the framework of Role-Based Design (RBD) and describe practical strategies for its integration into design workflow. Next, they examine their…
Teaching Information Security with Workflow Technology--A Case Study Approach
ERIC Educational Resources Information Center
He, Wu; Kshirsagar, Ashish; Nwala, Alexander; Li, Yaohang
2014-01-01
In recent years, there has been a significant increase in the demand from professionals in different areas for improving the curricula regarding information security. The use of authentic case studies in teaching information security offers the potential to effectively engage students in active learning. In this paper, the authors introduce the…
BP-Broker use-cases in the UncertWeb framework
NASA Astrophysics Data System (ADS)
Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia
2013-04-01
The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as: discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs and their execution through a workflow-engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases, by means of the BP-Broker, namely: - eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs; - FERA use-case: is composed of a set of models for predicting land-use and crop yield response to climatic and economic change; - NILU use-case: is composed of a Probabilistic Air Quality Forecasting model for predicting concentrations of air pollutants; - Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space; - Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate the exposure to air pollutants of individuals. Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract notation (i.e. BPMN 2.0), and delegating the resolution of technical issues (e.g. I/O matching) as much as possible to an external service. The results of the experimented solution indicate that this approach facilitates the integration of environmental model workflows into the standard geospatial Web Services framework (e.g. the GEOSS Common Infrastructure), mitigating its inherent complexity. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
NASA Astrophysics Data System (ADS)
Lau, Katherine; Isabelle, Martin; Lloyd, Gavin R.; Old, Oliver; Shepherd, Neil; Bell, Ian M.; Dorney, Jennifer; Lewis, Aaran; Gaifulina, Riana; Rodriguez-Justo, Manuel; Kendall, Catherine; Stone, Nicolas; Thomas, Geraint; Reece, David
2016-03-01
Despite its demonstrated potential as an accurate cancer diagnostic tool, Raman spectroscopy (RS) is yet to be adopted by the clinic for histopathology reviews. The Stratified Medicine through Advanced Raman Technologies (SMART) consortium has begun to address some of the hurdles in its adoption for cancer diagnosis. These hurdles include awareness and acceptance of the technology, practicality of integration into the histopathology workflow, data reproducibility and availability of transferrable models. The consortium is jointly developing optimised protocols for tissue sample preparation, data collection and analysis. These protocols will be supported by provision of suitable hardware and software tools to allow statistically sound classification models to be built and transferred for use on different systems. In addition, we are building a validated gastrointestinal (GI) cancers model, which can be trialled as part of the histopathology workflow at hospitals, and a classification tool. At the end of the project, we aim to deliver a robust Raman-based diagnostic platform to enable clinical researchers to stage cancer, define tumour margins, build cancer diagnostic models and discover novel disease biomarkers.
An approach for software-driven and standard-based support of cross-enterprise tumor boards.
Mangesius, Patrick; Fischer, Bernd; Schabetsberger, Thomas
2015-01-01
For tumor boards, the networking of different medical disciplines' expertise continues to gain importance. However, interdisciplinary tumor boards spread across several institutions are rarely supported by information technology tools today. The aim of this paper is to point out an approach for a tumor board management system prototype. For analyzing the requirements, an incremental process was used. The requirements were surveyed using Informal Conversational Interview and documented with Use Case Diagrams defined by the Unified Modeling Language (UML). Analyses of current EHR standards were conducted to evaluate technical requirements. Functional and technical requirements of clinical conference applications were evaluated and documented. In several steps, workflows were derived and application mockups were created. Although there is a vast amount of common understanding concerning how clinical conferences should be conducted and how their workflows should be structured, these are hardly standardized, neither on a functional nor on a technical level. This results in drawbacks for participants and patients. Using modern EHR technologies based on profiles such as IHE Cross Enterprise document sharing (XDS), these deficits could be overcome.
From days to hours: reporting clinically actionable variants from whole genome sequencing.
Middha, Sumit; Baheti, Saurabh; Hart, Steven N; Kocher, Jean-Pierre A
2014-01-01
As the cost of whole genome sequencing (WGS) decreases, clinical laboratories will be looking at broadly adopting this technology to screen for variants of clinical significance. To fully leverage this technology in a clinical setting, results need to be reported quickly, as the turnaround rate could potentially impact patient care. The latest sequencers can sequence a whole human genome in about 24 hours. However, depending on the computing infrastructure available, the processing of data can take several days, with the majority of computing time devoted to aligning reads to genomics regions that are to date not clinically interpretable. In an attempt to accelerate the reporting of clinically actionable variants, we have investigated the utility of a multi-step alignment algorithm focused on aligning reads and calling variants in genomic regions of clinical relevance prior to processing the remaining reads on the whole genome. This iterative workflow significantly accelerates the reporting of clinically actionable variants with no loss of accuracy when compared to genotypes obtained with the OMNI SNP platform or to variants detected with a standard workflow that combines Novoalign and GATK.
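A conceptual sketch of the multi-step idea follows (this is not the authors' pipeline and omits the real Novoalign/GATK command lines; align_reads, call_variants, and the region file name are hypothetical placeholders): variants in clinically actionable regions are reported first, and the remainder of the genome is processed afterwards.
```python
# Conceptual sketch only: stage 1 restricts alignment and variant calling to
# clinically relevant regions so an early report is possible; stage 2 finishes
# the whole genome. Helper functions and file names are hypothetical.

CLINICAL_REGIONS = "actionable_genes.bed"   # hypothetical region list

def align_reads(fastq, reference, regions=None):
    """Placeholder for an aligner invocation, optionally restricted to regions."""
    return f"aligned({fastq}, {regions or 'whole-genome'})"

def call_variants(alignment):
    """Placeholder for a variant caller."""
    return f"variants({alignment})"

def report(variants, stage):
    print(f"[{stage}] report ready: {variants}")

def iterative_workflow(fastq, reference):
    # Stage 1: clinically actionable regions -> report within hours
    first = call_variants(align_reads(fastq, reference, CLINICAL_REGIONS))
    report(first, "actionable")
    # Stage 2: remaining genome processed afterwards -> full report later
    rest = call_variants(align_reads(fastq, reference))
    report(rest, "whole-genome")

iterative_workflow("sample.fastq", "GRCh38.fa")
```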
DICOMweb™: Background and Application of the Web Standard for Medical Imaging.
Genereaux, Brad W; Dennison, Donald K; Ho, Kinson; Horn, Robert; Silver, Elliot Lewis; O'Donnell, Kevin; Kahn, Charles E
2018-05-10
This paper describes why and how DICOM, the standard that has been the basis for medical imaging interoperability around the world for several decades, has been extended into a full web technology-based standard, DICOMweb. At the turn of the century, healthcare embraced information technology, which created new problems and new opportunities for the medical imaging industry; at the same time, web technologies matured and began serving other domains well. This paper describes DICOMweb, how it extended the DICOM standard, and how DICOMweb can be applied to problems facing healthcare applications to address workflow and the changing healthcare climate.
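For example, a QIDO-RS search (the DICOMweb query transaction) can be issued with a plain HTTP client; the base URL below is a hypothetical server, and attribute keys follow the DICOM JSON model:
```python
# Minimal QIDO-RS study search sketch. The endpoint URL is hypothetical;
# attributes in the response are keyed by group/element tag per the DICOM
# JSON model (e.g. 0020000D = StudyInstanceUID).
import requests

BASE = "https://pacs.example.org/dicom-web"   # hypothetical DICOMweb service

resp = requests.get(
    f"{BASE}/studies",
    params={"PatientID": "12345", "limit": "10"},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()

for study in resp.json():
    uid = study.get("0020000D", {}).get("Value", [None])[0]
    print("StudyInstanceUID:", uid)
```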
Diabetes Information Technology: Designing Informatics Systems to Catalyze Change in Clinical Care
Lester, William T.; Zai, Adrian H.; Chueh, Henry C.; Grant, Richard W.
2008-01-01
Current computerized reminder and decision support systems intended to improve diabetes care have had a limited effect on clinical outcomes. Increasing pressures on health care networks to meet standards of diabetes care have created an environment where information technology systems for diabetes management are often created under duress, appended to existing clinical systems, and poorly integrated into the existing workflow. After defining the components of diabetes disease management, the authors present an eight-step conceptual framework to guide the development of more effective diabetes information technology systems for translating clinical information into clinical action. PMID:19885355
Quality data collection and management technology of aerospace complex product assembly process
NASA Astrophysics Data System (ADS)
Weng, Gang; Liu, Jianhua; He, Yongxi; Zhuang, Cunbo
2017-04-01
To address the difficult management and poor traceability of quality data in discrete assembly processes, a data collection and management method is proposed that takes the assembly process and the bill of materials (BOM) as its core. The method includes data collection based on workflow technology, a data model based on the BOM, and quality traceability of the assembly process. Finally, an assembly process quality data management system is developed, realizing effective control and management of quality information for the complex product assembly process.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it exploits available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Suitably processed and integrated, the data provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy with the photorealistic performance of the modeled surfaces. Lastly, a new virtual navigation solution is tested: within the same environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the reconstructed 3D model at once.
Burgarella, Sarah; Cattaneo, Dario; Pinciroli, Francesco; Masseroli, Marco
2005-12-01
Improvements in bio-nano-technologies and biomolecular techniques have led to increasing production of high-throughput experimental data. Spotted cDNA microarray is one of the most widespread technologies, used in single research laboratories and in biotechnology service facilities. Although they are routinely performed, spotted microarray experiments are complex procedures entailing several experimental steps and actors with different technical skills and roles. During an experiment, the involved actors, who may also be located at a distance, need to access and share specific experiment information according to their roles. Furthermore, complete information describing all experimental steps must be collected in an orderly manner to allow subsequent correct interpretation of experimental results. We developed MicroGen, a web system for managing information and workflow in the production pipeline of spotted microarray experiments. It consists of a core multi-database system able to store all data completely characterizing different spotted microarray experiments according to the Minimum Information About Microarray Experiments (MIAME) standard, and of an intuitive and user-friendly web interface able to support the collaborative work required among multidisciplinary actors and roles involved in spotted microarray experiment production. MicroGen supports six types of user roles: the researcher who designs and requests the experiment, the spotting operator, the hybridisation operator, the image processing operator, the system administrator, and the generic public user who can access the unrestricted part of the system to get information about MicroGen services. MicroGen represents a MIAME-compliant information system that enables managing workflow and supporting collaborative work in spotted microarray experiment production.
Coiera, Enrico
2014-01-01
Background and objective Annotations to physical workspaces such as signs and notes are ubiquitous. When densely annotated, work areas become communication spaces. This study aims to characterize the types and purpose of such annotations. Methods A qualitative observational study was undertaken in two wards and the radiology department of a 440-bed metropolitan teaching hospital. Images were purposefully sampled; 39 were analyzed after excluding inferior images. Results Annotation functions included signaling identity, location, capability, status, availability, and operation. They encoded data, rules or procedural descriptions. Most aggregated into groups that either created a workflow by referencing each other, supported a common workflow without reference to each other, or were heterogeneous, referring to many workflows. Higher-level assemblies of such groupings were also observed. Discussion Annotations make visible the gap between work done and the capability of a space to support work. Annotations are repairs of an environment, improving fitness for purpose, fixing inadequacy in design, or meeting emergent needs. Annotations thus record the missing information needed to undertake tasks, typically added post-implementation. Measuring annotation levels post-implementation could help assess the fit of technology to task. Physical and digital spaces could meet broader user needs by formally supporting user customization, ‘programming through annotation’. Augmented reality systems could also directly support annotation, addressing existing information gaps, and enhancing work with context-sensitive annotation. Conclusions Communication spaces offer a model of how work unfolds. Annotations make visible local adaptation that makes technology fit for purpose post-implementation and suggest an important role for annotatable information systems and digital augmentation of the physical environment. PMID:24005797
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Sivaramakrishnan, Chandrika; Critchlow, Terence J.
2011-07-04
A drawback of existing scientific workflow systems is the lack of support to domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.
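A minimal sketch of the first two abstractions (not the authors' implementation; all class and binding names are hypothetical) might look like this: a generic actor exposes an abstract interface, and a workflow template captures a reusable ordering of actor slots that a scientist binds to concrete functionality.
```python
# Illustrative sketch of "generic actors" and "workflow templates";
# class and binding names are invented for illustration.
from abc import ABC, abstractmethod

class GenericActor(ABC):
    """Abstract interface; concrete functionality is plugged in later."""
    @abstractmethod
    def run(self, data):
        ...

class UnitConverter(GenericActor):          # hypothetical concrete actor
    def run(self, data):
        return f"converted({data})"

class GroundwaterSimulator(GenericActor):   # hypothetical concrete actor
    def run(self, data):
        return f"simulated({data})"

class WorkflowTemplate:
    """A reusable ordering of actor slots that scientists adapt and bind."""
    def __init__(self, *slots):
        self.slots = slots

    def execute(self, data, bindings):
        for slot in self.slots:
            data = bindings[slot].run(data)
        return data

template = WorkflowTemplate("preprocess", "simulate")
print(template.execute("well_measurements.csv", {
    "preprocess": UnitConverter(),
    "simulate": GroundwaterSimulator(),
}))
```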
A systems approach to purchasing and implementing new technology.
Kinney, Kristin
2007-01-01
There are many risks associated with purchasing and implementing new technology. The challenge is to reduce these risks to gain maximum benefit from the new technology purchased. In the high-cost world of healthcare, there is little room for wasting time and money related to technology implementation. If the workflow of the current system is not adequately understood before purchasing and implementing a new technology, that technology may mask underlying problems and/or create new ones. The purpose of this article is to present a framework for analyzing the current process or system, identifying the strengths and the areas in need of improvement. A thorough understanding of the problems with the current system and the barriers to change is necessary to ensure the selection of the "right" technology and promote its implementation.
Good pharmacovigilance practices: technology enabled.
Nelson, Robert C; Palsulich, Bruce; Gogolak, Victor
2002-01-01
The assessment of spontaneous reports is most effective when it is conducted within a defined and rigorous process. The framework for good pharmacovigilance process (GPVP) is proposed as a subset of good postmarketing surveillance process (GPMSP), a functional structure for both a public health and corporate risk management strategy. GPVP comprises good practices that implement each step within a defined process. These practices are designed to efficiently and effectively detect and alert the drug safety professional to new and potentially important information on drug-associated adverse reactions. These practices are enabled by applied technology designed specifically for the review and assessment of spontaneous reports. Specific practices include rules-based triage, active query prompts for severe organ insults, contextual single case evaluation, statistical proportionality and correlational checks, case-series analyses, and templates for signal work-up and interpretation. These practices and the overall GPVP are supported by state-of-the-art web-based systems with powerful analytical engines, workflow and audit trails that allow validated systems support for valid drug safety signalling efforts. It is also important to understand that a process has a defined set of steps, and no single step can stand independently. Specifically, advanced use of technical alerting methods in isolation can mislead and allow one to misunderstand priorities and relative value. In the end, pharmacovigilance is a clinical art and a component process of the science of pharmacoepidemiology and risk management.
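As one concrete example of a statistical proportionality check (the abstract does not name the exact statistic, so this is an illustration rather than the product's method), the proportional reporting ratio (PRR) compares the reaction rate for the drug of interest against all other drugs in a spontaneous-report database:
```python
# Proportional reporting ratio (PRR), a common disproportionality measure in
# spontaneous-report screening. The counts below are hypothetical.
def proportional_reporting_ratio(a, b, c, d):
    """
    2x2 contingency table of spontaneous reports:
        a: drug of interest AND reaction of interest
        b: drug of interest, other reactions
        c: other drugs AND reaction of interest
        d: other drugs, other reactions
    """
    rate_drug = a / (a + b)
    rate_other = c / (c + d)
    return rate_drug / rate_other

# Hypothetical counts: 30 reports of the reaction among 1,000 for the drug,
# versus 200 among 50,000 for all other drugs.
prr = proportional_reporting_ratio(30, 970, 200, 49_800)
print(f"PRR = {prr:.1f}")   # ~7.5, i.e. the reaction is reported ~7.5x more often
```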
Optimization of business processes in banks through flexible workflow
NASA Astrophysics Data System (ADS)
Postolache, V.
2017-08-01
This article describes an integrated business model of a commercial bank. Examples of its components include trees of models and business processes, strategic goals, the organizational structure, the system architecture, operational and marketing risk models, etc. Practice has shown that developing and implementing an integrated business model significantly increases a bank's operating efficiency and the quality of its management, and ensures stable organizational and technological development. Considering the evolution of business processes in the banking sector, their common characteristics should be analysed. From the author's point of view, a business process is a set of activities of a commercial bank in which the input is one or more financial and material resources and the output, created as a result of this activity, is a banking product that has value to the consumer. With workflow technology, managing business process efficiency becomes a matter of managing the integration of resources and the sequence of actions aimed at achieving a given goal. In turn, this implies managing the interaction of jobs or functions, synchronizing assignment periods, reducing delays in the transmission of results, etc. Workflow technology is important for managers at all levels, as they can use it to strengthen control over what is happening in a particular unit and in the bank as a whole. The manager is able to plan, implement rules, interact within the framework of the company's procedures and the tasks entrusted to the system for function distribution and execution control, receive alerts on implementation, and obtain statistical data on the effectiveness of operating procedures. Development and active use of an integrated bank business model is one of the key success factors that contribute to the long-term and stable development of the bank, increase employee and business process efficiency, and support implementation of the strategic objectives.
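As a minimal illustration of the idea that workflow technology turns input resources into an output product by managing an ordered sequence of actions with execution tracking, the hypothetical sketch below models a loan-approval process. The step names and data are assumptions for the example, not taken from the article.

```python
# Hypothetical sketch: a bank business process as an ordered sequence of steps,
# with the basic execution tracking a workflow engine provides.
# Step names and data are illustrative assumptions.
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Step:
    name: str
    action: Callable[[Dict], Dict]


@dataclass
class Process:
    name: str
    steps: List[Step]
    log: List[str] = field(default_factory=list)

    def run(self, resources: Dict) -> Dict:
        data = dict(resources)  # "input": financial and material resources
        for step in self.steps:
            started = time.time()
            data = step.action(data)
            self.log.append(f"{step.name} finished in {time.time() - started:.3f}s")
        return data  # "output": the banking product delivered to the consumer


if __name__ == "__main__":
    loan_process = Process(
        name="loan approval",
        steps=[
            Step("collect documents", lambda d: {**d, "documents": "complete"}),
            Step("score applicant", lambda d: {**d, "score": 0.82}),
            Step("issue decision", lambda d: {**d, "approved": d["score"] > 0.7}),
        ],
    )
    print(loan_process.run({"applicant": "example"}))
    print(loan_process.log)  # execution-control statistics of interest to managers
```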
Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.
Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa
2012-05-04
Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.
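The "hierarchical workflows" concept mentioned above can be pictured as treating an entire sub-workflow as a single node of a parent workflow. The sketch below is a generic, hypothetical illustration of that idea in Python; it is not Tavaxy code and does not use its APIs.

```python
# Generic sketch of hierarchical workflow composition: a sub-workflow behaves
# like a single node of a parent workflow. This illustrates the concept only;
# it is not Tavaxy's implementation or API.
from typing import Callable, List, Union

Node = Union[Callable[[str], str], "Workflow"]


class Workflow:
    def __init__(self, name: str, nodes: List[Node]):
        self.name = name
        self.nodes = nodes

    def run(self, data: str) -> str:
        for node in self.nodes:
            # A nested Workflow is executed as if it were one node.
            data = node.run(data) if isinstance(node, Workflow) else node(data)
        return data


if __name__ == "__main__":
    # An "alignment" sub-workflow embedded in a larger analysis workflow.
    align = Workflow("alignment", [lambda d: d + " -> aligned"])
    parent = Workflow(
        "analysis",
        [lambda d: d + " -> qc", align, lambda d: d + " -> report"],
    )
    print(parent.run("reads"))
```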
Clinical Note Creation, Binning, and Artificial Intelligence.
Deliberato, Rodrigo Octávio; Celi, Leo Anthony; Stone, David J
2017-08-03
The creation of medical notes in software applications poses an intrinsic problem in workflow as the technology inherently intervenes in the processes of collecting and assembling information, as well as the production of a data-driven note that meets both individual and healthcare system requirements. In addition, the note writing applications in currently available electronic health records (EHRs) do not function to support decision making to any substantial degree. We suggest that artificial intelligence (AI) could be utilized to facilitate the workflows of the data collection and assembly processes, as well as to support the development of personalized, yet data-driven assessments and plans.
Xu, Leilei; Wang, Fang; Xu, Ying; Wang, Yi; Zhang, Cuiping; Qin, Xue; Yu, Hongxiu; Yang, Pengyuan
2015-12-07
As a key post-translational modification mechanism, protein acetylation plays critical roles in regulating and/or coordinating cell metabolism. Acetylation is a prevalent modification of enzymes. Because protein acetylation occurs in sub-stoichiometric amounts, extracting biologically meaningful information from these acetylation sites requires an adaptable, sensitive, specific, and robust method for their quantification. In this work, we combine immunoassays and multiple reaction monitoring-mass spectrometry (MRM-MS) technology to develop an absolute quantification method for acetylation. With this hybrid method, we quantified the acetylation levels of metabolic enzymes, which can help reveal the regulatory mechanisms of the studied enzymes. The development of this quantitative workflow is a pivotal step toward advancing our knowledge and understanding of the regulatory effects of protein acetylation in physiology and pathophysiology.
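Absolute quantification by MRM-MS is commonly performed against a stable-isotope-labeled (heavy) internal standard spiked at a known amount, so the analyte amount follows from the light/heavy peak-area ratio. The sketch below illustrates that general calculation only; the peak areas, amounts, and the occupancy step are assumptions for the example, not data or methods from the study above.

```python
# Generic sketch of absolute quantification of an acetylated peptide by MRM-MS
# against a spiked stable-isotope-labeled (heavy) internal standard.
# All peak areas and amounts below are illustrative assumptions.

def absolute_amount(light_area: float, heavy_area: float, spiked_fmol: float) -> float:
    """Analyte amount inferred from the light/heavy peak-area ratio."""
    return (light_area / heavy_area) * spiked_fmol


def acetylation_stoichiometry(acetyl_fmol: float, total_fmol: float) -> float:
    """Fraction of the peptide pool carrying the acetyl mark."""
    return acetyl_fmol / total_fmol


if __name__ == "__main__":
    acetyl = absolute_amount(light_area=1.2e5, heavy_area=4.0e5, spiked_fmol=50.0)
    total = absolute_amount(light_area=9.0e5, heavy_area=3.0e5, spiked_fmol=50.0)
    print(f"acetylated peptide: {acetyl:.1f} fmol")
    print(f"site occupancy: {acetylation_stoichiometry(acetyl, total):.1%}")
```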
The Methodology for Developing Mobile Agent Application for Ubiquitous Environment
NASA Astrophysics Data System (ADS)
Matsuzaki, Kazutaka; Yoshioka, Nobukazu; Honiden, Shinichi
A methodology that enables flexible and reusable development of mobile agent applications for a mobility-aware indoor environment is presented in this study. The methodology, named the Workflow-awareness model, is based on the concept of a pair of mobile agents cooperating to perform a given task. A monolithic mobile agent application with numerous concerns in a mobility-aware setting is divided into a master agent (MA) and a shadow agent (SA) according to the type of task. The MA executes the main application logic, which includes monitoring a user's physical movement and coordinating various services. The SA performs additional, environment-dependent tasks to help the MA achieve efficient execution without losing application logic. "Workflow-awareness" (WFA) means that the SA knows the MA's execution state transitions, so the SA can provide the proper task at the proper time. A prototype implementation of the methodology was built using AspectJ, which automates WFA by weaving communication modules into both the MA and the SA. The usefulness of the methodology is analyzed with respect to efficiency and software engineering aspects. In terms of efficiency, the overhead of WFA is small relative to the total execution time. From a software engineering perspective, WFA provides a mechanism to deploy one application in various situations.
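The prototype described above uses AspectJ weaving; the Python sketch below only illustrates the underlying idea, namely a shadow agent observing the master agent's execution state transitions and reacting at the right moment. It is an observer-style stand-in for the woven communication modules, not the authors' implementation, and all state and task names are assumptions.

```python
# Hypothetical sketch of workflow-awareness (WFA): a shadow agent (SA) observes
# the master agent's (MA) state transitions and performs supporting tasks at
# the right moment. The woven AspectJ communication modules are represented
# here by a plain observer callback; names are illustrative assumptions.
from typing import Callable, Dict, List


class MasterAgent:
    def __init__(self) -> None:
        self.state = "idle"
        self._observers: List[Callable[[str], None]] = []

    def subscribe(self, observer: Callable[[str], None]) -> None:
        self._observers.append(observer)

    def transition(self, new_state: str) -> None:
        self.state = new_state
        for notify in self._observers:  # stands in for the woven communication module
            notify(new_state)


class ShadowAgent:
    """Performs environment-dependent support tasks keyed to the MA's states."""

    def __init__(self) -> None:
        self.tasks: Dict[str, str] = {
            "moving": "prefetch services near the user's next location",
            "coordinating": "cache service descriptions locally",
        }

    def on_state_change(self, state: str) -> None:
        task = self.tasks.get(state)
        if task:
            print(f"SA: {task} (MA entered '{state}')")


if __name__ == "__main__":
    ma, sa = MasterAgent(), ShadowAgent()
    ma.subscribe(sa.on_state_change)
    for state in ["moving", "coordinating", "idle"]:
        ma.transition(state)
```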
Joda, Tim; Brägger, Urs
2015-01-01
To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (for those in the test group, using the digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (for those in the control group, using the conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time-efficiency analysis was defined as the primary outcome and was measured in minutes for every single clinical and laboratory work step. Statistical analysis was performed with the Wilcoxon rank sum test. All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, differed significantly between groups. The mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow process and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Therefore, digital processing for overall treatment was 16% faster. Detailed analysis of the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, which was significantly shorter for the test group (158.1 ± 17.2 minutes) than for the control group (189.8 ± 25.3 minutes) (P = .0001). Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine. This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.
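For readers who want to trace the reported overall saving, the short check below recomputes the relative difference from the mean production times given in the abstract; it uses only those two published means and gives roughly 16-17%, consistent with the figure stated above.

```python
# Quick arithmetic check of the reported overall time saving, using the mean
# total production times from the abstract (digital: 185.4 min; conventional: 223.0 min).
digital, conventional = 185.4, 223.0
saving = (conventional - digital) / conventional
print(f"relative time saving: {saving:.1%}")  # roughly 16-17%, in line with the reported figure
```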