Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine at various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code, posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can easily be deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
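To make the stage/component design above concrete, here is a minimal Python sketch of such an engine: swappable components per stage plus object-level parallelization. All names are illustrative; the actual QIFE is MATLAB code, and this is not its API.

```python
from multiprocessing import Pool

class FeatureEngine:
    """Toy four-stage engine: input -> pre-processing -> features -> output."""

    def __init__(self, reader, preprocessors, features, writer, cores=1):
        self.reader = reader                # input component
        self.preprocessors = preprocessors  # list of pre-processing components
        self.features = features            # {name: function} feature components
        self.writer = writer                # output component
        self.cores = cores

    def _process_one(self, obj_id):
        volume, mask = self.reader(obj_id)
        for step in self.preprocessors:
            volume, mask = step(volume, mask)
        return obj_id, {name: fn(volume, mask) for name, fn in self.features.items()}

    def run(self, obj_ids):
        if self.cores > 1:
            # object-level parallelization: one tumor per worker process
            # (components must be picklable, e.g. module-level functions)
            with Pool(self.cores) as pool:
                results = pool.map(self._process_one, obj_ids)
        else:
            results = [self._process_one(i) for i in obj_ids]
        for obj_id, feats in results:
            self.writer(obj_id, feats)
```

Swapping a component here only means passing a different function, which mirrors the run-time customization the abstract describes.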
An end-to-end workflow for engineering of biological networks from high-level specifications.
Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun
2012-08-17
We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and E. coli bacterial cells.
Coupling between a multi-physics workflow engine and an optimization framework
NASA Astrophysics Data System (ADS)
Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.
2016-03-01
A generic method for coupling a multi-physics workflow engine with an optimization framework is presented in this paper. The coupling architecture has been developed to preserve the integrity of the two frameworks. The objective is to make it possible to replace a framework, a workflow or an optimizer with another without changing the whole coupling procedure or modifying the main content of either framework. The coupling is achieved by using a socket-based communication library to exchange data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multi-criteria optimization. In addition to their robustness, GAs can handle the invalid data points that may appear during optimization, so they work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples; tests have shown good scaling efficiency of this parallelized framework. This coupling method has been applied to SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. Coupling SYCOMORE with the optimization platform URANIE enables design optimization against various figures of merit and constraints.
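The socket-based coupling lends itself to a short sketch. The Python illustration below shows one way an optimizer and a workflow engine could exchange candidate designs and figures of merit over a socket; the line-delimited JSON protocol, host, and port are assumptions for illustration, not the SYCOMORE/URANIE implementation.

```python
import json
import socket

def serve_evaluations(evaluate, host="localhost", port=5555):
    """Workflow-engine side: answer each JSON candidate with its figures of merit."""
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:                    # one candidate per line
                params = json.loads(line)
                result = evaluate(params)          # may be None for invalid designs
                stream.write(json.dumps(result) + "\n")
                stream.flush()

def request_evaluation(params, host="localhost", port=5555):
    """Optimizer side (e.g. inside a GA loop): ship one candidate, read the result."""
    with socket.create_connection((host, port)) as conn:
        with conn.makefile("rw") as stream:
            stream.write(json.dumps(params) + "\n")
            stream.flush()
            return json.loads(stream.readline())
```

Because only JSON crosses the boundary, either side can be swapped out without touching the other, which is the integrity-preserving property the abstract emphasizes.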
Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems
NASA Astrophysics Data System (ADS)
Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong
Recently, many vendors and groups have advocated using BPEL and WS-BPEL as workflow languages to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made hierarchical procedural programs difficult to manage and maintain. BPEL lacks constructs for logical modularity, such as the requirements construct from the STL [12], or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of Goals and service typing. AI planning is used to enable process engineers to create and validate systems that treat services and goals as first-class concepts, and to compile processes at run time for execution.
METAPHOR: Probability density estimation for machine learning based photometric redshifts
NASA Astrophysics Data System (ADS)
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, based mainly on the MLPQNA neural network as the internal engine for deriving photometric galaxy redshifts, but it makes it easy to replace MLPQNA with any other method for predicting photo-z values and their PDFs. We present results from a validation test of the workflow on galaxies from SDSS-DR9, also demonstrating the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with PDFs derived from a traditional SED template-fitting method (Le Phare).
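As a rough illustration of how an empirical photo-z PDF can be built around a swappable regressor, the sketch below perturbs the input photometry and histograms the re-predicted redshifts. The Gaussian perturbation law and all parameter values are assumptions for illustration; they are not taken from the METAPHOR paper.

```python
import numpy as np

def photoz_pdf(predict, magnitudes, sigma=0.05, n_draws=500,
               bin_edges=np.linspace(0.0, 1.0, 101)):
    """Per-object PDF: perturb the photometry, re-run the predictor, histogram."""
    mags = np.asarray(magnitudes, dtype=float)
    rng = np.random.default_rng(0)
    draws = mags + rng.normal(0.0, sigma, size=(n_draws, mags.size))
    zs = np.array([predict(d) for d in draws])   # one photo-z per perturbed input
    pdf, _ = np.histogram(zs, bins=bin_edges, density=True)
    return pdf, bin_edges
```

Here `predict` can be any trained model (MLPQNA, KNN, Random Forest), which mirrors the plug-in design the abstract describes.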
Haston, Elspeth; Cubey, Robert; Pullan, Martin; Atkins, Hannah; Harris, David J
2012-01-01
Digitisation programmes in many institutes frequently involve disparate and irregular funding, diverse selection criteria and scope, with different members of staff managing and operating the processes. These factors influenced the decision at the Royal Botanic Garden Edinburgh to develop an integrated workflow for the digitisation of herbarium specimens that is modular and scalable, so that a single overall workflow can be used for all digitisation projects. This integrated workflow comprises three principal elements: a specimen workflow, a data workflow and an image workflow. The specimen workflow is strongly linked to curatorial processes, which affect the prioritisation, selection and preparation of the specimens. The importance of including a conservation element within the digitisation workflow is highlighted. The data workflow includes the concept of three main categories of collection data: label data, curatorial data and supplementary data. Each category of data has its own properties, which influence the timing of data capture within the workflow. Software has been developed for the rapid capture of curatorial data, and optical character recognition (OCR) software is being used to increase the efficiency of capturing label data and supplementary data. The large number and size of the images has necessitated the inclusion of automated systems within the image workflow. PMID:22859881
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput of up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reductions in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement over the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721
Nexus: A modular workflow management system for quantum simulation codes
NASA Astrophysics Data System (ADS)
Krogel, Jaron T.
2016-01-01
The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
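The following hypothetical Python fragment mimics the kind of transparent, input-file-like workflow composition described above. It is modeled on the description only; it is not Nexus's actual API, and the job names and input fields are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Sim:
    name: str
    code: str                           # e.g. "quantum_espresso", "qmcpack"
    inputs: dict
    depends_on: list = field(default_factory=list)

relax = Sim("relax", "quantum_espresso", {"calculation": "relax"})
scf   = Sim("scf",   "quantum_espresso", {"calculation": "scf"}, [relax])
vmc   = Sim("vmc",   "qmcpack",          {"method": "vmc"},      [scf])

def run_in_order(sims):
    """Tiny dependency-respecting execution loop (real engines submit to queues)."""
    done = set()
    while len(done) < len(sims):
        for sim in sims:
            if sim.name not in done and all(d.name in done for d in sim.depends_on):
                print(f"submitting {sim.code}:{sim.name}")  # job submission goes here
                done.add(sim.name)

run_in_order([relax, scf, vmc])
```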
TAMU: Blueprint for A New Space Mission Operations System Paradigm
NASA Technical Reports Server (NTRS)
Ruszkowski, James T.; Meshkat, Leila; Haensly, Jean; Pennington, Al; Hogle, Charles
2011-01-01
The Transferable, Adaptable, Modular and Upgradeable (TAMU) Flight Production Process (FPP) is a System of Systems (SOS) framework that cuts across multiple organizations and their associated facilities, which are, in the most general case, in geographically dispersed locations, to develop the architecture and associated workflow processes of products for a broad range of flight projects. Further, TAMU FPP provides for the automatic execution and re-planning of the workflow processes as they become operational. This paper provides the blueprint for the TAMU FPP paradigm. This blueprint presents a complete, coherent technique, process and tool set that results in an infrastructure usable for full-lifecycle design and decision making during the flight production process. Building on many years of experience with the Space Shuttle Program (SSP) and the International Space Station (ISS), and taking the now-cancelled Constellation Program, which aimed to return humans to the moon, as a starting point, the Mission Operations Directorate (MOD) has been building a modern model-based Systems Engineering infrastructure to re-engineer the FPP. This infrastructure uses a structured modeling and architecture development approach to optimize the system design, thereby reducing sustaining costs and improving system efficiency, reliability, robustness and maintainability. With the advent of the new vision for human space exploration, it is now necessary to generalize this framework further to take into consideration a broad range of missions and the participation of multiple organizations outside of MOD; hence the Transferable, Adaptable, Modular and Upgradeable (TAMU) concept.
Conventions and workflows for using Situs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wriggers, Willy, E-mail: wriggers@biomachina.org
2012-04-01
Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed.
NASA Astrophysics Data System (ADS)
Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.
2015-12-01
Over the last 5 years we have created a community collaboratory, Vhub.org [Palma et al., J. Appl. Volcanol. 3:2, doi:10.1186/2191-5040-3-2], as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources and data, and an online platform to support collaborative efforts. As the community (current active users > 6000, from an estimated community of comparable size) embeds the collaboratory's tools into educational and research workflows, it has become imperative to: a) redesign tools into robust, open-source, reusable software for online and offline usage and enhancement; b) share large datasets seamlessly and securely with remote collaborators and other users; and c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development and redevelopment has been twofold: first, to use best practices in software engineering and new hardware such as multi-core and graphics processing units; second, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration, and validation. Our software engineering practices include open-source licensing to facilitate community contributions, modularity and reusability. Our initial targets are four popular tools on Vhub: TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven datasets, e.g. digital elevation models of topography, satellite imagery and field observations of deposits. These data are often maintained in private repositories and shared privately by "sneaker-net". As a partial solution to this, we tested mechanisms using iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for usage such as uncertainty quantification for hazard analysis using physical models.
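A minimal sketch of the ensemble-style uncertainty quantification mentioned above: sample uncertain inputs, fan the runs out over worker processes, and summarize the spread. Everything here is illustrative; `run_model` merely stands in for a real simulator such as TITAN2D, and the input distribution is invented.

```python
from concurrent.futures import ProcessPoolExecutor
import random
import statistics

def run_model(params):
    """Placeholder for one simulator run (a real workflow would launch a job)."""
    return params["volume"] * random.uniform(0.8, 1.2)

def main():
    rng = random.Random(42)
    # 64-member ensemble drawn from an assumed log-normal input distribution
    samples = [{"volume": rng.lognormvariate(0.0, 0.5)} for _ in range(64)]
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(run_model, samples))
    print("mean:", statistics.mean(outputs), "stdev:", statistics.stdev(outputs))

if __name__ == "__main__":
    main()
```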
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhyankar, Vinay V.; Wu, Meiye; Koh, Chung-Yan
2016-05-26
Microfluidic barrier tissue models have emerged as advanced in vitro tools to explore interactions with external stimuli such as drug candidates, pathogens, or toxins. However, the procedures required to establish and maintain these systems can be challenging to implement for end users, particularly those without significant in-house engineering expertise. Here we present a module-based approach that provides an easy-to-use workflow to establish, maintain, and analyze microscale tissue constructs. Our approach begins with a removable culture insert that is magnetically coupled, decoupled, and transferred between standalone, prefabricated microfluidic modules for simplified cell seeding, culture, and downstream analysis. The modular approach allows several options for perfusion, including standard syringe pumps or integration with a self-contained gravity-fed module for simple cell maintenance. As proof of concept, we establish a culture of primary human microvascular endothelial cells (HMVEC) and report combined surface protein imaging and gene expression after controlled apical stimulation with the bacterial endotoxin lipopolysaccharide (LPS). We also demonstrate the feasibility of incorporating hydrated biomaterial interfaces into the microfluidic architecture by integrating an ultra-thin (<1 μm), self-assembled hyaluronic acid/peptide amphiphile culture membrane with brain-specific Young's modulus (~1 kPa). To highlight the importance of including biomimetic interfaces in microscale models, we report multi-tiered readouts from primary rat cortical cells cultured on the self-assembled membrane and compare a panel of mRNA targets with primary brain tissue signatures. We anticipate that the modular approach and simplified operational workflows presented here will enable a wide range of research groups to incorporate microfluidic barrier tissue models into their work.
Lott, Steffen C; Wolfien, Markus; Riege, Konstantin; Bagnacani, Andrea; Wolkenhauer, Olaf; Hoffmann, Steve; Hess, Wolfgang R
2017-11-10
RNA sequencing (RNA-Seq) has become a widely used approach to study quantitative and qualitative aspects of transcriptome data. The variety of RNA-Seq protocols, experimental study designs and the characteristic properties of the organisms under investigation greatly affect downstream and comparative analyses. In this review, we explain the impact of structured pre-selection, classification and integration of best-performing tools within modularized data-analysis workflows and ready-to-use computing infrastructures on experimental data analyses. We highlight example workflows and use cases for prokaryotic, eukaryotic and mixed dual RNA-Seq (meta-transcriptomics) experiments. In addition, we summarize the expertise of the laboratories participating in the project consortium "Structured Analysis and Integration of RNA-Seq experiments" (de.STAIR) and its integration with the Galaxy workbench of the RNA Bioinformatics Center (RBC).
MassCascade: Visual Programming for LC-MS Data Processing in Metabolomics.
Beisken, Stephan; Earll, Mark; Portwood, David; Seymour, Mark; Steinbeck, Christoph
2014-04-01
Liquid chromatography coupled to mass spectrometry (LC-MS) is commonly applied to investigate the small molecule complement of organisms. Several software tools are typically joined in custom pipelines to semi-automatically process and analyse the resulting data. General workflow environments like the Konstanz Information Miner (KNIME) offer the potential of an all-in-one solution to process LC-MS data by allowing easy integration of different tools and scripts. We describe MassCascade and its workflow plug-in for processing LC-MS data. The Java library integrates frequently used algorithms in a modular fashion, thus enabling it to serve as back-end for graphical front-ends. The functions available in MassCascade have been encapsulated in a plug-in for the workflow environment KNIME, allowing combined use with e.g. statistical workflow nodes from other providers and making the tool intuitive to use without knowledge of programming. The design of the software guarantees a high level of modularity where processing functions can be quickly replaced or concatenated. MassCascade is an open-source library for LC-MS data processing in metabolomics. It embraces the concept of visual programming through its KNIME plug-in, simplifying the process of building complex workflows. The library was validated using open data.
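The modular, concatenable processing functions described above can be sketched in a few lines. This is a Python illustration of the design idea only, not MassCascade's Java API; the scan format and function names are invented.

```python
from functools import reduce

def noise_filter(scans, floor=100.0):
    """Drop low-intensity signals (placeholder for a real denoising step)."""
    return [s for s in scans if s["intensity"] >= floor]

def normalize(scans):
    """Scale intensities to the base peak."""
    top = max(s["intensity"] for s in scans)
    return [{**s, "intensity": s["intensity"] / top} for s in scans]

def pipeline(*steps):
    """Concatenate steps that share one list-in/list-out signature."""
    return lambda scans: reduce(lambda data, step: step(data), steps, scans)

process = pipeline(noise_filter, normalize)
print(process([{"mz": 101.1, "intensity": 250.0},
               {"mz": 102.2, "intensity": 50.0}]))
```

A shared signature is what lets a visual front-end such as a KNIME-style workflow chain these functions as interchangeable nodes.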
Adaptive Façade: Variant-Finding using Shape Grammar
NASA Astrophysics Data System (ADS)
Tomasowa, Riva; Utama Sjarifudin, Firza
2017-12-01
Modular façade construction has advanced greatly since the advent of computer-aided manufacturing, which bridges the modeling phase and the manufacturing phase to scale up mass production. A consequence is that the identity of a product or building façade is commonly generated exactly as the initial design intended, so revising the early model greatly impacts the later process. The aim of this paper is to propose a way to address these two challenges that explores potential designs without putting the manufacturing process at risk. Shape grammar is used to conceive more designs in the early stage, derived from the initial product: the modular adaptive façade system. The derivations are then tested through simulation to establish the efficacy of the models. We find that the workflow contributes to a better design and engineering process, and that the solution allows diversification of the façade expressions.
DEWEY: the DICOM-enabled workflow engine system.
Erickson, Bradley J; Langer, Steve G; Blezek, Daniel J; Ryan, William J; French, Todd L
2014-06-01
Workflow is a widely used term to describe the sequence of steps needed to accomplish a task. The use of workflow technology in medicine, and in medical imaging in particular, is limited. In this article, we describe the application of a workflow engine to improve workflow in a radiology department. We implemented a DICOM-enabled workflow engine system in our department, designed for scalability, reliability, and flexibility. We implemented several workflows, including one that replaced an existing manual workflow, and measured the number of examinations prepared in time without and with the workflow system. The system significantly increased the number of examinations prepared in time for clinical review compared to human effort, and it met the design goals defined at its outset. Workflow engines appear to have value as ways to efficiently ensure that complex workflows are completed in a timely fashion.
Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows
NASA Astrophysics Data System (ADS)
Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.
2014-12-01
The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which includes toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is in the automated job launching and monitoring capabilities, which allow a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to the users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation involving numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and interdisciplinary research, spanning applications from analysis of multi-sensor datasets to decision-making in climate-sensitive sectors, with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are built from first principles, require considerable expertise to run, and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with the data is prohibitively inefficient. A major barrier to progress is that modeling workflows are not treated by practitioners as a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models, an inability to keep pace with accelerating data production rates, and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources. This paper describes the critical-path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer-term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
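As a small illustration of the near-term fixes named above (script modularization and parallelization), the sketch below factors a per-file analysis out of a monolithic script into a pure function and maps it over model outputs in parallel. The directory layout and file suffix are assumptions for illustration.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def summarize(path):
    """Pure, testable per-file step factored out of a monolithic script."""
    return path.name, len(path.read_bytes())

if __name__ == "__main__":
    files = sorted(Path("model_output").glob("*.nc"))   # hypothetical layout
    with ProcessPoolExecutor() as pool:
        for name, nbytes in pool.map(summarize, files):
            print(name, nbytes)
```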
Rational Design of an Ultrasensitive Quorum-Sensing Switch.
Zeng, Weiqian; Du, Pei; Lou, Qiuli; Wu, Lili; Zhang, Haoqian M; Lou, Chunbo; Wang, Hongli; Ouyang, Qi
2017-08-18
One of the purposes of synthetic biology is to develop rational methods that accelerate the design of genetic circuits, saving time and effort spent on experiments and providing reliably predictable circuit performance. We applied a reverse engineering approach to design an ultrasensitive transcriptional quorum-sensing switch. We want to explore how systems biology can guide synthetic biology in the choice of specific DNA sequences and their regulatory relations to achieve a targeted function. The workflow comprises network enumeration that achieves the target function robustly, experimental restriction of the obtained candidate networks, global parameter optimization via mathematical analysis, selection and engineering of parts based on these calculations, and finally, circuit construction based on the principles of standardization and modularization. The performance of realized quorum-sensing switches was in good qualitative agreement with the computational predictions. This study provides practical principles for the rational design of genetic circuits with targeted functions.
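The abstract does not reproduce the paper's equations; as a hedged illustration, ultrasensitivity in such transcriptional switches is commonly quantified with a Hill-type response and an effective Hill coefficient:

```latex
% Not from the paper: a standard Hill-type response often used to
% quantify ultrasensitivity in transcriptional switches.
\[
  f(x) = \frac{x^{n}}{K^{n} + x^{n}},
  \qquad
  n_{\mathrm{eff}} = \frac{\ln 81}{\ln\left(x_{0.9}/x_{0.1}\right)},
\]
% where $x_{0.9}$ and $x_{0.1}$ are the inputs producing 90% and 10% of
% maximal output; the switch is called ultrasensitive when
% $n_{\mathrm{eff}} > 1$.
```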
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data- and/or compute-intensive applications on Distributed Computing Infrastructures (DCIs) have recently become standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows are still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started FP7 ER-flow project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: a database where workflows and metadata about workflows can be stored; it is a central repository for discovering and sharing workflows within and among communities. 2. SHIWA Portal: a web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: a desktop environment that provides access capabilities similar to those of the SHIWA Portal, but runs on the user's desktop/laptop instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal; other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. Via third-party workflow engines, the Portal supports the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows, even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses them to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
Design and implementation of workflow engine for service-oriented architecture
NASA Astrophysics Data System (ADS)
Peng, Shuqing; Duan, Huining; Chen, Deyun
2009-04-01
As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, little reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer architecture for a workflow engine based on SOA is put forward, following the XPDL standard of the Workflow Management Coalition; the route control mechanism of the control model is implemented, the scheduling strategy for cyclic and acyclic routing is designed, and the workflow engine is implemented using technologies such as XML, JSP and EJB.
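A toy illustration of the route-control idea (guarded transitions supporting both cyclic and acyclic routing) is sketched below in Python; the paper's XPDL/JSP/EJB implementation is not reproduced, and the process definition here is invented.

```python
def run(process, start, ctx):
    """Walk activities; each transition is a (guard, destination) pair."""
    activity = start
    while activity is not None:
        handler, transitions = process[activity]
        handler(ctx)
        activity = next((dst for cond, dst in transitions if cond(ctx)), None)

process = {
    "review":  (lambda c: c.update(passes=c["passes"] + 1),
                [(lambda c: c["passes"] < 3, "revise"),    # cyclic route (loop back)
                 (lambda c: c["passes"] >= 3, "approve")]),
    "revise":  (lambda c: None, [(lambda c: True, "review")]),
    "approve": (lambda c: print("approved after", c["passes"], "reviews"), []),
}
run(process, "review", {"passes": 0})
```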
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCIs) to access large pools of resources and services, but running these experiments requires specific expertise that they may not have. Workflows can hide resources and services behind a virtualisation layer, providing a user interface that researchers can use. There are many scientific workflow systems, but they are not interoperable, and learning a workflow system and creating workflows can require significant effort. Given this effort, it is not reasonable to expect researchers to learn new workflow systems just to run workflows developed in other systems. Overcoming this requires workflow interoperability solutions that allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability (CGI) concept, which enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept, integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept, and the DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface, using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal descriptions of workflows and workflow engines, plus the executables and data needed to execute them, and it offers a wide range of browse and search operations. To support non-native workflow execution, the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository and either invokes locally or remotely pre-deployed workflow engines, or submits workflow engines together with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages the certificates needed to execute workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows; further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application support to research communities within the project consortium (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond it (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows, and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios, and how the Hydrometeorology research community runs simulations on SSP.
Inda, Márcia A; van Batenburg, Marinus F; Roos, Marco; Belloum, Adam S Z; Vasunin, Dmitry; Wibisono, Adianto; van Kampen, Antoine H C; Breit, Timo M
2008-08-08
Chromosome location is often used as a scaffold to organize genomic information in both the living cell and molecular biological research. Thus, ever-increasing amounts of data about genomic features are stored in public databases and can be readily visualized by genome browsers. To perform in silico experimentation conveniently with this genomics data, biologists need tools to process and compare datasets routinely and explore the obtained results interactively. The complexity of such experimentation requires these tools to be based on an e-Science approach, hence generic, modular, and reusable. A virtual laboratory environment with workflows, workflow management systems, and Grid computation are therefore essential. Here we apply an e-Science approach to develop SigWin-detector, a workflow-based tool that can detect significantly enriched windows of (genomic) features in a (DNA) sequence in a fast and reproducible way. For proof-of-principle, we utilize a biological use case to detect regions of increased and decreased gene expression (RIDGEs and anti-RIDGEs) in human transcriptome maps. We improved the original method for RIDGE detection by replacing the costly step of estimation by random sampling with a faster analytical formula for computing the distribution of the null hypothesis being tested and by developing a new algorithm for computing moving medians. SigWin-detector was developed using the WS-VLAM workflow management system and consists of several reusable modules that are linked together in a basic workflow. The configuration of this basic workflow can be adapted to satisfy the requirements of the specific in silico experiment. As we show with the results from analyses in the biological use case on RIDGEs, SigWin-detector is an efficient and reusable Grid-based tool for discovering windows enriched for features of a particular type in any sequence of values. Thus, SigWin-detector provides the proof-of-principle for the modular e-Science based concept of integrative bioinformatics experimentation.
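The moving-median step lends itself to a short sketch. The illustration below keeps a sorted window and updates it incrementally instead of re-sorting at every position; it shows the idea only and is not the paper's own algorithm.

```python
from bisect import insort, bisect_left

def moving_median(values, w):
    """Median of each length-w window, maintained via an incrementally
    updated sorted list (upper median is used when w is even)."""
    window = sorted(values[:w])
    medians = [window[w // 2]]
    for i in range(w, len(values)):
        window.pop(bisect_left(window, values[i - w]))  # drop element leaving the window
        insort(window, values[i])                       # insert element entering it
        medians.append(window[w // 2])
    return medians

print(moving_median([5, 1, 4, 2, 8, 9, 3], 3))  # -> [4, 2, 4, 8, 8]
```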
Agile parallel bioinformatics workflow management using Pwrake.
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
2011-09-08
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment, are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method, with its iterative development phases after trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented Pwrake workflows to process next-generation sequencing data using the Genome Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, scientific workflow development iterates over two phases: the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate the modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain-specific language built on Ruby gives rakefiles the flexibility needed for writing scientific workflows. Furthermore, the readability and maintainability of rakefiles may facilitate sharing workflows within the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows. PMID:21899774
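The two development phases identified above can be made concrete with a small sketch: the workflow definition only names parameters, while the adjustment phase edits a separate parameter set. This is hedged Python pseudocode in the spirit of the description (Pwrake itself uses Ruby rakefiles), and `variant-caller` is a hypothetical command.

```python
PARAMS = {"min_base_quality": 20, "threads": 4}   # tweaked in the adjustment phase

def tasks(params):
    """Definition phase: the task graph names parameters, never hard-codes them."""
    return [
        ("align", f"bwa mem -t {params['threads']} ref.fa reads.fq > aln.sam"),
        ("call",  f"variant-caller -q {params['min_base_quality']} aln.sam"),  # hypothetical tool
    ]

for name, cmd in tasks(PARAMS):
    print(f"[{name}] {cmd}")   # a real engine would execute these and track dependencies
```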
BPELPower—A BPEL execution engine for geospatial web services
NASA Astrophysics Data System (ADS)
Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi
2012-10-01
The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows, especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate its capabilities. The study shows a standard-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancements at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broad Web paradigms.
Characterizing Strain Variation in Engineered E. coli Using a Multi-Omics-Based Workflow
Brunk, Elizabeth; George, Kevin W.; Alonso-Gutierrez, Jorge; ...
2016-05-19
Understanding the complex interactions that occur between heterologous and native biochemical pathways represents a major challenge in metabolic engineering and synthetic biology. We present a workflow that integrates metabolomics, proteomics, and genome-scale models of Escherichia coli metabolism to study the effects of introducing a heterologous pathway into a microbial host. This workflow incorporates complementary approaches from computational systems biology, metabolic engineering, and synthetic biology; provides molecular insight into how the host organism microenvironment changes due to pathway engineering; and demonstrates how biological mechanisms underlying strain variation can be exploited as an engineering strategy to increase product yield. As a proof of concept, we present the analysis of eight engineered strains producing three biofuels: isopentenol, limonene, and bisabolene. Application of this workflow identified the roles of candidate genes, pathways, and biochemical reactions in observed experimental phenomena and facilitated the construction of a mutant strain with improved productivity. The contributed workflow is available as an open-source tool in the form of iPython notebooks.
Kaushik, Gaurav; Ivkovic, Sinisa; Simonovic, Janko; Tijanic, Nebojsa; Davis-Dusenbery, Brandi; Kural, Deniz
2017-01-01
As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optimizations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions.
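For readers unfamiliar with CWL, the sketch below emits a minimal CommandLineTool description of the kind executors such as Rabix interpret. The field names follow the CWL v1.0 standard; the wrapped command (`wc -l`) and file names are illustrative.

```python
import yaml  # pip install pyyaml

# A minimal CWL CommandLineTool: it wraps `wc -l` over an input file and
# captures standard output as the tool's result.
tool = {
    "cwlVersion": "v1.0",
    "class": "CommandLineTool",
    "baseCommand": ["wc", "-l"],
    "inputs": {
        "infile": {"type": "File", "inputBinding": {"position": 1}},
    },
    "stdout": "count.txt",
    "outputs": {
        "count": {"type": "stdout"},
    },
}

with open("linecount.cwl", "w") as fh:
    yaml.safe_dump(tool, fh, sort_keys=False)
# A CWL executor would then run, e.g.: cwl-runner linecount.cwl --infile data.txt
```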
NASA Astrophysics Data System (ADS)
Wang, Ximing; Martinez, Clarisa; Wang, Jing; Liu, Ye; Liu, Brent
2014-03-01
Clinical trials usually have a demand to collect, track, and analyze multimedia data according to the workflow. Currently, clinical trial data management requirements are normally addressed with custom-built systems. Challenges occur in the workflow design within different trials. The traditional pre-defined custom-built system is usually limited to a specific clinical trial and normally requires time-consuming and resource-intensive software development. To provide a solution, we present a user-customizable, imaging-informatics-based intelligent workflow engine system for managing stroke rehabilitation clinical trials. The intelligent workflow engine provides flexibility in building and tailoring the workflow in the various stages of clinical trials. By providing a solution to tailor and automate the workflow, the system will save time and reduce errors for clinical trials. Although our system is designed for rehabilitation clinical trials, it may be extended to other imaging-based clinical trials as well.
Reconfigurable Software for Mission Operations
NASA Technical Reports Server (NTRS)
Trimble, Jay
2014-01-01
We developed software that provides flexibility to mission organizations through modularity and composability. Modularity enables removal and addition of functionality through the installation of plug-ins. Composability enables users to assemble software from pre-built reusable objects, thus reducing or eliminating the walls associated with traditional application architectures and enabling unique combinations of functionality. We have used composable objects to reduce display build time, create workflows, and build scenarios to test concepts for lunar roving operations. The software is open source and may be downloaded from https://github.com/nasa/mct.
A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines
2011-01-01
Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
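The sketch below is not the PaPy API; it is a generic Python illustration of the dataflow idea the abstract describes: user-written components joined into a pipeline and evaluated lazily in adjustable batches on a worker pool, where the chunk size tunes the parallelism-versus-memory trade-off.

```python
from multiprocessing import Pool

def parse(record):           # user-written component 1
    return record.strip().upper()

def measure(seq):            # user-written component 2
    return len(seq)

def piped(record):           # data-pipe: output of parse feeds measure
    return measure(parse(record))

if __name__ == "__main__":
    records = (f"seq{i}\n" for i in range(1000))  # lazy input stream
    with Pool(processes=4) as pool:
        # imap consumes the stream in chunks, so only `chunksize` items of
        # intermediate data are in flight per worker at any moment.
        for result in pool.imap(piped, records, chunksize=8):
            pass  # stream results onward instead of materializing a list
```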
Handler, Michael; Schier, Peter P; Fritscher, Karl D; Raudaschl, Patrik; Johnson Chacko, Lejo; Glueckert, Rudolf; Saba, Rami; Schubert, Rainer; Baumgarten, Daniel; Baumgartner, Christian
2017-01-01
Our sense of balance and spatial orientation strongly depends on the correct functionality of our vestibular system. Vestibular dysfunction can lead to blurred vision and impaired balance and spatial orientation, causing a significant decrease in quality of life. Recent studies have shown that vestibular implants offer a possible treatment for patients with vestibular dysfunction. The close proximity of the vestibular nerve bundles, the facial nerve, and the cochlear nerve poses a major challenge to targeted stimulation of the vestibular system. Modeling the electrical stimulation of the vestibular system allows for an efficient analysis of stimulation scenarios prior to time- and cost-intensive in vivo experiments. Current models are based on animal data or CAD models of human anatomy. In this work, a (semi-)automatic modular workflow is presented for the stepwise transformation of segmented anatomy data of human vestibular specimens into an electrical model that is subsequently analyzed. The steps of this workflow include (i) the transformation of labeled datasets to a tetrahedral mesh, (ii) nerve fiber anisotropy and fiber computation as a basis for neuron models, (iii) inclusion of arbitrary electrode designs, (iv) simulation of quasistationary potential distributions, and (v) analysis of the effect of stimulus waveforms on the stimulation outcome. Results obtained by applying the workflow to human datasets and to the average shape of a statistical model showed high qualitative agreement with, and quantitative ranges comparable to, data from the literature. Based on our workflow, a detailed analysis of intra- and extra-labyrinthine electrode configurations with various stimulation waveforms and electrode designs can be performed on patient-specific anatomy, making this framework a valuable tool for current optimization questions concerning vestibular implants in humans.
A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.
Cieślik, Marcin; Mura, Cameron
2011-02-25
Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
JTSA: an open source framework for time series abstractions.
Sacchi, Lucia; Capozzi, Davide; Bellazzi, Riccardo; Larizza, Cristiana
2015-10-01
The evaluation of the clinical status of a patient is frequently based on the temporal evolution of some parameters, making the detection of temporal patterns a priority in data analysis. Temporal abstraction (TA) is a methodology widely used in medical reasoning for summarizing and abstracting longitudinal data. This paper describes JTSA (Java Time Series Abstractor), a framework including a library of algorithms for time series preprocessing and abstraction and an engine to execute a workflow for temporal data processing. The JTSA framework is grounded on a comprehensive ontology that models temporal data processing both from the data storage and the abstraction computation perspective. The JTSA framework is designed to allow users to build their own analysis workflows by combining different algorithms. Thanks to the modular structure of a workflow, simple to highly complex patterns can be detected. The JTSA framework has been developed in Java 1.7 and is distributed under the GPL as a jar file. JTSA provides: a collection of algorithms to perform temporal abstraction and preprocessing of time series, a framework for defining and executing data analysis workflows based on these algorithms, and a GUI for workflow prototyping and testing. The whole JTSA project relies on a formal model of the data types and of the algorithms included in the library. This model is the basis for the design and implementation of the software application. Taking into account this formalized structure, the user can easily extend the JTSA framework by adding new algorithms. Results are shown in the context of the EU project MOSAIC, extracting relevant patterns from data related to the long-term monitoring of diabetic patients. The proof that JTSA is a versatile tool that can be adapted to different needs is given by its possible uses, both as a standalone tool for data summarization and as a module to be embedded into other architectures to select specific phenotypes based on TAs in a large dataset. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
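JTSA itself is a Java framework; the Python sketch below illustrates the basic state-abstraction step it generalizes: mapping a numeric time series to symbolic states and merging consecutive samples into episodes. The thresholds are illustrative only, not clinical guidance.

```python
from itertools import groupby

def state(value):
    """Map a glucose value (mg/dL) to a symbolic state (illustrative cutoffs)."""
    if value < 70:
        return "LOW"
    if value <= 180:
        return "NORMAL"
    return "HIGH"

def abstract(series):
    """series: list of (timestamp, value); returns merged (start, end, state) episodes."""
    labeled = [(t, state(v)) for t, v in series]
    episodes = []
    for lab, grp in groupby(labeled, key=lambda x: x[1]):
        grp = list(grp)
        episodes.append((grp[0][0], grp[-1][0], lab))
    return episodes

glucose = [(0, 65), (1, 68), (2, 95), (3, 110), (4, 200), (5, 210)]
print(abstract(glucose))
# [(0, 1, 'LOW'), (2, 3, 'NORMAL'), (4, 5, 'HIGH')]
```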
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi
A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order model) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.
Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi; ...
2017-03-13
A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order model) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.
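A compact sketch of steps two through four of such a workflow on synthetic data: microstructures quantified as feature vectors (standing in for, e.g., spatial correlation statistics), reduced with PCA, and linked to processing parameters by regression. The array sizes and the linear ground truth are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
process = rng.uniform(size=(50, 2))                          # e.g., power, scan speed
truth = rng.normal(size=(2, 300))
stats = process @ truth + 0.05 * rng.normal(size=(50, 300))  # microstructure statistics

pca = PCA(n_components=3).fit(stats)           # dimensionality reduction
scores = pca.transform(stats)

link = LinearRegression().fit(process, scores)  # process -> structure linkage
print("reduced-order model R^2:", round(link.score(process, scores), 3))
```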
Clinical Decision Support Systems (CDSS) for preventive management of COPD patients.
Velickovski, Filip; Ceccaroni, Luigi; Roca, Josep; Burgos, Felip; Galdiz, Juan B; Marina, Nuria; Lluch-Ariet, Magí
2014-11-28
The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways, and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine in critical decision points during the execution of a care pathway. The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD), which can be easily integrated into healthcare providers' workflows. The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, and (iii) patient stratification was deployed on-line in a secured environment. The CDSS diagnostic performance was assessed using a validation set of 323 cases, with 90% specificity and 96% sensitivity. Web services were integrated in existing health information system platforms. Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations with a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' workflows can be achieved seamlessly through the use of a modular design and a service-oriented architecture that connects to existing health information systems.
Clinical Decision Support Systems (CDSS) for preventive management of COPD patients
2014-01-01
Background The use of information and communication technologies to manage chronic diseases allows the application of integrated care pathways, and the optimization and standardization of care processes. Decision support tools can assist in the adherence to best-practice medicine in critical decision points during the execution of a care pathway. Objectives The objectives are to design, develop, and assess a clinical decision support system (CDSS) offering a suite of services for the early detection and assessment of chronic obstructive pulmonary disease (COPD), which can be easily integrated into healthcare providers' workflows. Methods The software architecture model for the CDSS, interoperable clinical-knowledge representation, and inference engine were designed and implemented to form a base CDSS framework. The CDSS functionalities were iteratively developed through requirement-adjustment/development/validation cycles using enterprise-grade software-engineering methodologies and technologies. Within each cycle, clinical-knowledge acquisition was performed by a health-informatics engineer and a clinical-expert team. Results A suite of decision-support web services for (i) COPD early detection and diagnosis, (ii) spirometry quality-control support, and (iii) patient stratification was deployed on-line in a secured environment. The CDSS diagnostic performance was assessed using a validation set of 323 cases, with 90% specificity and 96% sensitivity. Web services were integrated in existing health information system platforms. Conclusions Specialized decision support can be offered as a complementary service to existing policies of integrated care for chronic-disease management. The CDSS was able to issue recommendations with a high degree of accuracy to support COPD case-finding. Integration into healthcare providers' workflows can be achieved seamlessly through the use of a modular design and a service-oriented architecture that connects to existing health information systems. PMID:25471545
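As a rough illustration of offering decision support as a web service, the sketch below exposes a single case-finding endpoint with Flask. The route, payload, and the GOLD fixed-ratio rule (FEV1/FVC < 0.70) are a simplified, hypothetical stand-in for the validated CDSS logic described above, not the study's implementation.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/cdss/copd/case-finding", methods=["POST"])
def case_finding():
    data = request.get_json(force=True)
    fev1_fvc = data["fev1"] / data["fvc"]    # post-bronchodilator spirometry values
    suspect = fev1_fvc < 0.70                # GOLD fixed-ratio criterion
    return jsonify({"fev1_fvc": round(fev1_fvc, 2), "copd_suspected": suspect})

if __name__ == "__main__":
    app.run(port=5000)
```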
Anima: Modular Workflow System for Comprehensive Image Data Analysis
Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa
2014-01-01
Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing, through segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies: testing different algorithms developed on different imaging platforms, and automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
Modularization of genetic elements promotes synthetic metabolic engineering.
Qi, Hao; Li, Bing-Zhi; Zhang, Wen-Qian; Liu, Duo; Yuan, Ying-Jin
2015-11-15
In the context of emerging synthetic biology, metabolic engineering is moving to the next stage, powered by new technologies. Systematic modularization of genetic elements makes it more convenient to engineer biological systems for chemical production or other desired purposes. In the past few years, progress has been made in engineering metabolic pathways using synthetic biology tools. Here, we spotlight the implementation of modularized genetic elements in metabolic engineering. First, we overview the principles developed for modularizing genetic elements and then discuss how genetic modules have advanced metabolic engineering studies. Next, we pick out some milestones in engineered metabolic pathways achieved in the past few years. Last, we discuss the rapidly rising synthetic biology field of "building a genome" and its potential in metabolic engineering. Copyright © 2015 Elsevier Inc. All rights reserved.
From the desktop to the grid: scalable bioinformatics via workflow conversion.
de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver
2016-03-12
Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers the immediate benefits of identifying bottlenecks and pinpointing sections that could benefit from parallelization, among others. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community, and therefore each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.
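The idea behind a structured tool descriptor is that a concrete command line (or a KNIME/gUSE node) can be generated from it mechanically. The sketch below parses a deliberately simplified, illustrative XML layout and emits a command line; it is not the actual Common Tool Descriptor schema.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical tool descriptor: a structured, platform-free
# description of a CLI tool from which an invocation can be generated.
doc = """
<tool name="aligner">
  <param name="input" flag="-i" value="reads.fq"/>
  <param name="threads" flag="-t" value="4"/>
</tool>
"""
root = ET.fromstring(doc)
argv = [root.get("name")]
for p in root.iter("param"):
    argv += [p.get("flag"), p.get("value")]
print(" ".join(argv))   # aligner -i reads.fq -t 4
```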
[Integration of the radiotherapy irradiation planning in the digital workflow].
Röhner, F; Schmucker, M; Henne, K; Momm, F; Bruggmoser, G; Grosu, A-L; Frommhold, H; Heinemann, F E
2013-02-01
At the Clinic of Radiotherapy at the University Hospital Freiburg, all relevant workflow is paperless. After implementing the Operating Schedule System (OSS) as a framework, all processes are being implemented into the departmental system MOSAIQ. Designing a digital workflow for radiotherapy irradiation planning is a large challenge, it requires interdisciplinary expertise and therefore the interfaces between the professions also have to be interdisciplinary. For every single step of radiotherapy irradiation planning, distinct responsibilities have to be defined and documented. All aspects of digital storage, backup and long-term availability of data were considered and have already been realized during the OSS project. After an analysis of the complete workflow and the statutory requirements, a detailed project plan was designed. In an interdisciplinary workgroup, problems were discussed and a detailed flowchart was developed. The new functionalities were implemented in a testing environment by the Clinical and Administrative IT Department (CAI). After extensive tests they were integrated into the new modular department system. The Clinic of Radiotherapy succeeded in realizing a completely digital workflow for radiotherapy irradiation planning. During the testing phase, our digital workflow was examined and afterwards was approved by the responsible authority.
Modularity in developmental biology and artificial organs: a missing concept in tissue engineering.
Lenas, Petros; Luyten, Frank P; Doblare, Manuel; Nicodemou-Lena, Eleni; Lanzara, Andreina Elena
2011-06-01
Tissue engineering is reviving itself, adopting the concept of biomimetics of in vivo tissue development. A basic concept of developmental biology is the modularity of the tissue architecture according to which intermediates in tissue development constitute semiautonomous entities. Both engineering and nature have chosen the modular architecture to optimize the product or organism development and evolution. Bioartificial tissues do not have a modular architecture. On the contrary, artificial organs of modular architecture have been already developed in the field of artificial organs. Therefore the conceptual support of tissue engineering by the field of artificial organs becomes critical in its new endeavor of recapitulating in vitro the in vivo tissue development. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Modular co-culture engineering, a new approach for metabolic engineering.
Zhang, Haoran; Wang, Xiaonan
2016-09-01
With the development of metabolic engineering, employment of a selected microbial host for accommodation of a designed biosynthetic pathway to produce a target compound has achieved tremendous success in the past several decades. Yet, increasing requirements for sophisticated microbial biosynthesis call for establishment and application of more advanced metabolic engineering methodologies. Recently, important progress has been made towards employing more than one engineered microbial strains to constitute synthetic co-cultures and modularizing the biosynthetic labor between the co-culture members in order to improve bioproduction performance. This emerging approach, referred to as modular co-culture engineering in this review, presents a valuable opportunity for expanding the scope of the broad field of metabolic engineering. We highlight representative research accomplishments using this approach, especially those utilizing metabolic engineering tools for microbial co-culture manipulation. Key benefits and major challenges associated with modular co-culture engineering are also presented and discussed. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
A networked modular hardware and software system for MRI-guided robotic prostate interventions
NASA Astrophysics Data System (ADS)
Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.
2012-02-01
Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, large soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy for MRI-guided procedures over the last decade, a unified method for developing high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness, and modularity, that work inside a closed-bore MRI scanner still remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex vivo phantom validates the system workflow and MRI-compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.
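One small piece of the scan-plane-alignment step, sketched in plain numpy: constructing an orthonormal imaging-plane frame whose normal is aligned with the planned needle axis, the kind of transform a navigation layer would push to the scanner. This is illustrative geometry under assumed values, not the system's actual 3D Slicer/OpenIGTLink code.

```python
import numpy as np

def plane_frame(needle_axis):
    """Build an orthonormal frame (u, v, n) with n along the needle axis."""
    n = needle_axis / np.linalg.norm(needle_axis)
    helper = np.array([0.0, 0.0, 1.0])
    if abs(n @ helper) > 0.9:          # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(helper, n)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return np.column_stack([u, v, n])  # 3x3 rotation: plane axes -> patient axes

R = plane_frame(np.array([0.2, 0.1, 0.97]))
print(np.round(R.T @ R, 6))            # identity: the frame is orthonormal
```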
Huser, Vojtech; Rasmussen, Luke V; Oberg, Ryan; Starren, Justin B
2011-04-10
Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
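XPDL represents a process as activities connected by transitions, which is what makes the same flowchart executable both retrospectively and prospectively. The sketch below walks a stripped-down, namespace-free fragment in XPDL's Activity/Transition vocabulary; real XPDL adds namespaces and many more attributes, and the clinical scenario named here is hypothetical.

```python
import xml.etree.ElementTree as ET

xpdl = """
<WorkflowProcess Id="chf_alert">
  <Activities>
    <Activity Id="check_bnp"/>
    <Activity Id="notify_md"/>
  </Activities>
  <Transitions>
    <Transition From="check_bnp" To="notify_md"/>
  </Transitions>
</WorkflowProcess>
"""
root = ET.fromstring(xpdl)
edges = [(t.get("From"), t.get("To")) for t in root.iter("Transition")]
print("activities:", [a.get("Id") for a in root.iter("Activity")])
print("flow:", edges)   # the engine executes activities along these edges
```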
Resource Aware Intelligent Network Services (RAINS) Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, Tom; Yang, Xi
The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the "RAINS Computation Engine (RCE)". The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system with key capabilities including the absorption of a variety of multi-resource model types into integrated models, a novel architecture which uses model-based communications across the full stack, flexible provision of abstract or intent-based user-facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroads ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
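A toy illustration of the modeling idea: resources, services, and relationships form a typed graph over which a computation engine can answer path and placement queries. The node kinds, edge attributes, and query below are illustrative, not actual MRML terms.

```python
import networkx as nx  # pip install networkx

g = nx.Graph()
g.add_node("dtn-a", kind="compute")
g.add_node("storage-1", kind="storage")
g.add_node("switch-x", kind="network")
g.add_edge("dtn-a", "switch-x", bandwidth_gbps=100)
g.add_edge("switch-x", "storage-1", bandwidth_gbps=40)

# A computation engine over such a graph can answer questions like:
# what connects compute to storage, and what is the bottleneck capacity?
path = nx.shortest_path(g, "dtn-a", "storage-1")
bottleneck = min(g.edges[u, v]["bandwidth_gbps"] for u, v in zip(path, path[1:]))
print(path, "bottleneck:", bottleneck, "Gbps")
```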
2011-01-01
Background Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. Results We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. Conclusions We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform. PMID:21477364
SECIMTools: a suite of metabolomics data analysis tools.
Kirpich, Alexander S; Ibarra, Miguel; Moskalenko, Oleksandr; Fear, Justin M; Gerken, Joseph; Mi, Xinlei; Ashrafi, Ali; Morse, Alison M; McIntyre, Lauren M
2018-04-20
Metabolomics has the promise to transform the area of personalized medicine with the rapid development of high-throughput technology for untargeted analysis of metabolites. Open-access, easy-to-use analytic tools that are broadly accessible to the biological community need to be developed. While the technology used in metabolomics varies, most metabolomics studies have a set of features identified. Galaxy is an open access platform that enables scientists at all levels to interact with big data. Galaxy promotes reproducibility by saving histories and enabling the sharing of workflows among scientists. SECIMTools (SouthEast Center for Integrated Metabolomics) is a set of Python applications that are available both as standalone tools and wrapped for use in Galaxy. The suite includes a comprehensive set of quality control metrics (retention time window evaluation and various peak evaluation tools), visualization techniques (hierarchical cluster heatmap, principal component analysis, modulated modularity clustering), basic statistical analysis methods (partial least squares discriminant analysis, analysis of variance, t-test, Kruskal-Wallis non-parametric test), advanced classification methods (random forest, support vector machines), and advanced variable selection tools (least absolute shrinkage and selection operator (LASSO) and Elastic Net). SECIMTools leverages the Galaxy platform and enables integrated workflows for metabolomics data analysis made from building blocks designed for easy use and interpretability. Standard data formats and a set of utilities allow arbitrary linkages between tools to encourage novel workflow designs. The Galaxy framework enables future data integration for metabolomics studies with other omics data.
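An illustrative per-feature group test of the kind such a suite wraps: a Kruskal-Wallis scan over a synthetic feature-by-sample metabolite matrix. The data, sizes, and spiked effect are made up for the sketch; SECIMTools itself adds Galaxy wrappers, standard file formats, and many more tools around steps like this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.lognormal(size=(100, 6))   # 100 features x 6 samples
treated = rng.lognormal(size=(100, 6))
treated[:10] *= 3                        # spike 10 differential features

# One non-parametric test per feature across the two groups.
pvals = np.array([stats.kruskal(c, t).pvalue for c, t in zip(control, treated)])
print("features with p < 0.05:", int((pvals < 0.05).sum()))
```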
Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko
2016-06-01
Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering, and visualization. The framework was applied in the analysis of actual expression datasets related to lung, breast, and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Towards PCC for Concurrent and Distributed Systems (Work in Progress)
NASA Technical Reports Server (NTRS)
Henriksen, Anders S.; Filinski, Andrzej
2009-01-01
We outline some conceptual challenges in extending the PCC paradigm to a concurrent and distributed setting, and sketch a generalized notion of module correctness based on viewing communication contracts as economic games. The model supports compositional reasoning about modular systems and is meant to apply not only to certification of executable code, but also of organizational workflows.
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin; ...
2016-10-06
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules—an in-memory data store—with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
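A toy sketch of the codesign idea behind task placement: schedule a task on the worker that already holds its input in the in-memory store, falling back to the least-loaded worker. All names and the data structures are illustrative assumptions, not the Hercules API.

```python
# Which node currently caches each data chunk, and each node's queue depth.
data_location = {"chunk1": "node-a", "chunk2": "node-b"}
load = {"node-a": 2, "node-b": 0, "node-c": 1}

def place(task_input):
    """Locality-aware placement with a least-loaded fallback."""
    node = data_location.get(task_input)
    if node is not None:
        return node                      # data already resident: avoid a transfer
    return min(load, key=load.get)       # otherwise pick the least-loaded node

print(place("chunk2"))   # node-b (input is already resident there)
print(place("chunk9"))   # node-b (no locality info; least loaded wins)
```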
Moenninghoff, Christoph; Umutlu, Lale; Kloeters, Christian; Ringelstein, Adrian; Ladd, Mark E; Sombetzki, Antje; Lauenstein, Thomas C; Forsting, Michael; Schlamann, Marc
2013-06-01
Workflow efficiency and the workload of radiological technologists (RTs) were compared for head examinations performed on two 1.5 T magnetic resonance (MR) scanners, one equipped with and one without an automated user interface called the "day optimizing throughput" (Dot) workflow engine. Thirty-four patients with known intracranial pathology were examined with a 1.5 T MR scanner with the Dot workflow engine (Siemens MAGNETOM Aera) and with a 1.5 T MR scanner with a conventional user interface (Siemens MAGNETOM Avanto) using four standardized examination protocols. The elapsed time for all necessary work steps, which were performed by 11 RTs within the total examination time, was compared for each examination at both MR scanners. The RTs evaluated the user-friendliness of both scanners by questionnaire. Normality of distribution was checked for all continuous variables by use of the Shapiro-Wilk test. Normally distributed variables were analyzed by Student's paired t-test; otherwise, the Wilcoxon signed-rank test was used to compare means. The total examination time of MR examinations performed with the Dot engine was reduced from 24:53 to 20:01 minutes (P < .001), and the necessary RT intervention decreased by 61% (P < .001). The Dot engine's automated choice of MR protocols was rated significantly better by the RTs than the conventional user interface (P = .001). According to this preliminary study, the Dot workflow engine is a time-saving user assistance software that decreases the RTs' effort significantly and may help to automate neuroradiological examinations for higher workflow efficiency. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
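The study's analysis plan, sketched on made-up paired timings: Shapiro-Wilk checks normality of the paired differences, then a paired t-test is used, falling back to the Wilcoxon signed-rank test when normality is rejected. The numbers below are illustrative, not the study's data.

```python
from scipy import stats

conventional = [24.9, 26.1, 23.5, 25.4, 24.2, 25.8]   # minutes, illustrative
dot_engine   = [20.0, 21.2, 19.1, 20.6, 19.8, 20.9]

diffs = [a - b for a, b in zip(conventional, dot_engine)]
if stats.shapiro(diffs).pvalue > 0.05:                 # differences look normal
    test = stats.ttest_rel(conventional, dot_engine)   # paired t-test
else:
    test = stats.wilcoxon(conventional, dot_engine)    # non-parametric fallback
print(test)
```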
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, C.; Crook, J.
1998-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.
Shabo, Amnon; Peleg, Mor; Parimbelli, Enea; Quaglini, Silvana; Napolitano, Carlo
2016-12-07
Implementing a decision-support system within a healthcare organization requires integration of clinical domain knowledge with resource constraints. Computer-interpretable guidelines (CIG) are excellent instruments for addressing clinical aspects, while business process management (BPM) languages and workflow (Wf) engines manage the logistic organizational constraints. Our objective is the orchestration of all the relevant factors needed for successful execution of patients' care pathways, especially when spanning the continuum of care, from acute to community or home care. We considered three strategies for integrating CIGs with organizational workflows: extending the CIG or BPM languages and their engines, or creating an interplay between them. We used the interplay approach to implement a set of use cases arising from a CIG implementation in the domain of atrial fibrillation. To provide a more scalable and standards-based solution, we explored the use of the IHE Cross-Enterprise Document Workflow Integration Profile. We describe our proof-of-concept implementation of five use cases. We utilized the Personal Health Record of the MobiGuide project to implement a loosely coupled approach between the Activiti BPM engine and the Picard CIG engine. Changes in the PHR were detected by polling. IHE profiles were used to develop workflow documents that orchestrate cross-enterprise execution of cardioversion. Interplay between CIG and BPM engines can support orchestration of care flows within organizational settings.
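A minimal sketch of the loose coupling described: polling a PHR-like store for changes and handing new or updated entries to a workflow-engine callback. The store, keys, and the two engine stubs are hypothetical names for illustration, not the MobiGuide, Activiti, or Picard interfaces.

```python
def poll(store, seen, on_change):
    """One polling pass: dispatch any new or updated PHR entries."""
    for key, version in store.items():
        if seen.get(key) != version:
            seen[key] = version
            on_change(key, version)

phr = {"cardioversion_consent": 1}   # hypothetical PHR content (key -> version)
seen = {}

notify = lambda k, v: print("BPM engine notified:", k, "v", v)
poll(phr, seen, notify)              # first pass: consent v1 dispatched
phr["cardioversion_consent"] = 2     # a CIG engine updates the PHR
poll(phr, seen, notify)              # second pass: only the change dispatched
```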
Integrated workflows for spiking neuronal network simulations
Antolík, Ján; Davison, Andrew P.
2013-01-01
The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902
Integrated workflows for spiking neuronal network simulations.
Antolík, Ján; Davison, Andrew P
2013-01-01
The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages.
Survey of Modular Military Vehicles: Benefits and Burdens
2016-01-01
Dasch, Jean M.; Gorsich, David J.
Modularity in military vehicle design is generally considered a positive attribute that promotes adaptability, resilience, and cost savings. The benefits and burdens of modularity are considered ... At the U.S. Army Tank Automotive Research, Development and Engineering Center, vehicles were considered based on horizontal modularity, vertical modularity, and distributed modularity. Examples were given for each.
Hoekman, Berend; Breitling, Rainer; Suits, Frank; Bischoff, Rainer; Horvatovich, Peter
2012-01-01
Data processing forms an integral part of biomarker discovery and contributes significantly to the ultimate result. To compare and evaluate various publicly available open source label-free data processing workflows, we developed msCompare, a modular framework that allows the arbitrary combination of different feature detection/quantification and alignment/matching algorithms in conjunction with a novel scoring method to evaluate their overall performance. We used msCompare to assess the performance of workflows built from modules of publicly available data processing packages such as SuperHirn, OpenMS, and MZmine and our in-house developed modules on peptide-spiked urine and trypsin-digested cerebrospinal fluid (CSF) samples. We found that the quality of results varied greatly among workflows, and interestingly, heterogeneous combinations of algorithms often performed better than the homogeneous workflows. Our scoring method showed that the union of feature matrices of different workflows outperformed the original homogeneous workflows in some cases. msCompare is open source software (https://trac.nbic.nl/mscompare), and we provide a web-based data processing service for our framework by integration into the Galaxy server of the Netherlands Bioinformatics Center (http://galaxy.nbic.nl/galaxy) to allow scientists to determine which combination of modules provides the most accurate processing for their particular LC-MS data sets. PMID:22318370
A collection of open source applications for mass spectrometry data mining.
Gallardo, Óscar; Ovelleiro, David; Gay, Marina; Carrascal, Montserrat; Abian, Joaquin
2014-10-01
We present several bioinformatics applications for the identification and quantification of phosphoproteome components by MS. These applications include a front-end graphical user interface that combines several Thermo RAW to MASCOT™ Generic Format extractors (EasierMgf), two graphical user interfaces for the search engines OMSSA and SEQUEST (OmssaGui and SequestGui), and three further applications: one for the management of databases in FASTA format (FastaTools), another for the integration of search results from up to three search engines (Integrator), and a third for the visualization of mass spectra and their corresponding database search results (JsonVisor). These applications were developed to solve some of the common problems found in proteomic and phosphoproteomic data analysis and were integrated into the workflow for data processing and feeding of our LymPHOS database. The applications were designed modularly and can be used standalone. These tools are written in the Perl and Python programming languages and are supported on Windows platforms. They are all released under an Open Source Software license and can be freely downloaded from our software repository hosted at GoogleCode. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
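For orientation, MASCOT Generic Format (MGF) — the target format of extractors like the one described — is a simple text format of BEGIN IONS/END IONS blocks. The sketch below writes one spectrum with made-up peak values; it is a format illustration, not the EasierMgf code.

```python
# Minimal MGF writer: one spectrum block with illustrative values.
spectrum = {
    "title": "scan=1024",
    "pepmass": 652.33,
    "charge": "2+",
    "peaks": [(175.119, 1200.0), (276.166, 800.0), (404.225, 950.0)],
}

with open("example.mgf", "w") as mgf:
    mgf.write("BEGIN IONS\n")
    mgf.write(f"TITLE={spectrum['title']}\n")
    mgf.write(f"PEPMASS={spectrum['pepmass']}\n")
    mgf.write(f"CHARGE={spectrum['charge']}\n")
    for mz, intensity in spectrum["peaks"]:
        mgf.write(f"{mz} {intensity}\n")   # fragment m/z and intensity pairs
    mgf.write("END IONS\n")
```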
Atomic force microscopy reveals the mechanical design of a modular protein
Li, Hongbin; Oberhauser, Andres F.; Fowler, Susan B.; Clarke, Jane; Fernandez, Julio M.
2000-01-01
Tandem modular proteins underlie the elasticity of natural adhesives, cell adhesion proteins, and muscle proteins. The fundamental unit of elastic proteins is their individually folded modules. Here, we use protein engineering to construct multimodular proteins composed of Ig modules of different mechanical strength. We examine the mechanical properties of the resulting tandem modular proteins by using single protein atomic force microscopy. We show that by combining modules of known mechanical strength, we can generate proteins with novel elastic properties. Our experiments reveal the simple mechanical design of modular proteins and open the way for the engineering of elastic proteins with defined mechanical properties, which can be used in tissue and fiber engineering. PMID:10823913
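The force-extension behavior of each unfolded module in such AFM experiments is conventionally fit with the worm-like chain (WLC) model. The sketch below evaluates the Marko-Siggia interpolation formula with persistence and contour lengths typical of Ig domains; the parameter values are illustrative, not the paper's fits.

```python
import numpy as np

kBT = 4.11e-21   # thermal energy at ~298 K, joules
p = 0.4e-9       # persistence length, m (typical for unfolded polypeptide)
Lc = 28e-9       # contour length of one unfolded module, m (illustrative)

def wlc_force(x):
    """Marko-Siggia WLC interpolation: force at extension x (x < Lc)."""
    r = x / Lc
    return (kBT / p) * (0.25 / (1 - r) ** 2 - 0.25 + r)

for x in np.linspace(5e-9, 25e-9, 5):
    print(f"x = {x*1e9:4.1f} nm -> F = {wlc_force(x)*1e12:6.1f} pN")
```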
ERIC Educational Resources Information Center
Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin
2016-01-01
In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…
7 CFR Exhibit B to Subpart A of... - Requirements for Modular/Panelized Housing Units
Code of Federal Regulations, 2013 CFR
2013-01-01
... Regional Letters of Acceptance (RLA), Truss Connector Bulletins (TCB); and, Mechanical Engineering... issued by HUD include: Structural Engineering Bulletins (SEB) on a national basis, Area Letters of... Category III housing (modular/panelized housing that does not have to have a Structural Engineering...
7 CFR Exhibit B to Subpart A of... - Requirements for Modular/Panelized Housing Units
Code of Federal Regulations, 2014 CFR
2014-01-01
... Regional Letters of Acceptance (RLA), Truss Connector Bulletins (TCB); and, Mechanical Engineering... issued by HUD include: Structural Engineering Bulletins (SEB) on a national basis, Area Letters of... Category III housing (modular/panelized housing that does not have to have a Structural Engineering...
7 CFR Exhibit B to Subpart A of... - Requirements for Modular/Panelized Housing Units
Code of Federal Regulations, 2012 CFR
2012-01-01
... Regional Letters of Acceptance (RLA), Truss Connector Bulletins (TCB); and, Mechanical Engineering... issued by HUD include: Structural Engineering Bulletins (SEB) on a national basis, Area Letters of... Category III housing (modular/panelized housing that does not have to have a Structural Engineering...
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, Charlie; Crook, Jerry
1997-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.
NASA Astrophysics Data System (ADS)
Cui, Wei; Parker, Laurie L.
2016-07-01
Fluorescent drug screening assays are essential for tyrosine kinase inhibitor discovery. Here we demonstrate a flexible, antibody-free TR-LRET kinase assay strategy that is enabled by the combination of streptavidin-coated quantum dot (QD) acceptors and biotinylated, Tb3+ sensitizing peptide donors. By exploiting the spectral features of Tb3+ and QD, and the high binding affinity of the streptavidin-biotin interaction, we achieved multiplexed detection of kinase activity in a modular fashion without requiring additional covalent labeling of each peptide substrate. This strategy is compatible with high-throughput screening, and should be adaptable to the rapidly changing workflows and targets involved in kinase inhibitor discovery.
Numerical simulations of human tibia osteosynthesis using modular plates based on Nitinol staples.
Tarniţă, Daniela; Tarniţă, D N; Popa, D; Grecu, D; Tarniţă, Roxana; Niculescu, D; Cismaru, F
2010-01-01
The shape memory alloys exhibit a number of remarkable properties, which open new possibilities in engineering and more specifically in biomedical engineering. The most important alloy used in biomedical applications is NiTi. This alloy combines the characteristics of the shape memory effect and superelasticity with excellent corrosion resistance, wear characteristics, mechanical properties and a good biocompatibility. These properties make it an ideal biological engineering material, especially in orthopedic surgery and orthodontics. In this work, modular plates for the osteosynthesis of long bone fractures are presented. The proposed modular plates are realized from identical, completely interchangeable modules, made of titanium or stainless steel, with U-shaped staples made of Nitinol as connecting elements. Using computed tomography (CT) images to provide three-dimensional geometric details and the SolidWorks software package, three-dimensional virtual models of the tibia bone and of the modular plates are obtained. The finite element models of the tibia bone and of the modular plate are generated. For numerical simulation, VisualNastran software is used. Finally, displacement diagrams and von Mises strain diagrams are obtained for the modular plate and for the ensemble of the fractured tibia and modular plate.
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Do you have workflows with many manual tasks that slow down your business? Or, do you scale back workflows because there are simply too many manual tasks? Basic workflow robots can automate some common tasks, but not everything. This presentation will show how advanced robots called "expression robots" can be set up to perform everything from simple tasks such as: moving, creating folders, renaming, changing or creating an attribute, and revising, to more complex tasks like: creating a PDF, or even launching a session of Creo Parametric and performing a specific modeling task. Expression robots are able to utilize the Java API and Info*Engine to do almost anything you can imagine! Best of all, these tools are supported by PTC and will work with later releases of Windchill. Limited knowledge of Java, Info*Engine, and XML is required. The attendee will learn what tasks expression robots are capable of performing. The attendee will learn what is involved in setting up an expression robot. The attendee will gain a basic understanding of simple Info*Engine tasks.
Describing and Modeling Workflow and Information Flow in Chronic Disease Care
Unertl, Kim M.; Weinger, Matthew B.; Johnson, Kevin B.; Lorenzi, Nancy M.
2009-01-01
Objectives The goal of the study was to develop an in-depth understanding of work practices, workflow, and information flow in chronic disease care, to facilitate development of context-appropriate informatics tools. Design The study was conducted over a 10-month period in three ambulatory clinics providing chronic disease care. The authors iteratively collected data using direct observation and semi-structured interviews. Measurements The authors observed all aspects of care in three different chronic disease clinics for over 150 hours, including 157 patient-provider interactions. Observation focused on interactions among people, processes, and technology. Observation data were analyzed through an open coding approach. The authors then developed models of workflow and information flow using Hierarchical Task Analysis and Soft Systems Methodology. The authors also conducted nine semi-structured interviews to confirm and refine the models. Results The study had three primary outcomes: models of workflow for each clinic, models of information flow for each clinic, and an in-depth description of work practices and the role of health information technology (HIT) in the clinics. The authors identified gaps between the existing HIT functionality and the needs of chronic disease providers. Conclusions In response to the analysis of workflow and information flow, the authors developed ten guidelines for design of HIT to support chronic disease care, including recommendations to pursue modular approaches to design that would support disease-specific needs. The study demonstrates the importance of evaluating workflow and information flow in HIT design and implementation. PMID:19717802
Vernick, Kenneth D.
2017-01-01
Metavisitor is a software package that allows biologists and clinicians without specialized bioinformatics expertise to detect and assemble viral genomes from deep sequence datasets. The package is composed of a set of modular bioinformatic tools and workflows that are implemented in the Galaxy framework. Using the graphical Galaxy workflow editor, users with minimal computational skills can use existing Metavisitor workflows or adapt them to suit specific needs by adding or modifying analysis modules. Metavisitor works with DNA, RNA or small RNA sequencing data over a range of read lengths and can use a combination of de novo and guided approaches to assemble genomes from sequencing reads. We show that the software has the potential for quick diagnosis as well as discovery of viruses from a vast array of organisms. Importantly, we provide here executable Metavisitor use cases, which increase the accessibility and transparency of the software, ultimately enabling biologists or clinicians to focus on biological or medical questions. PMID:28045932
Sreedharan, Vipin T; Schultheiss, Sebastian J; Jean, Géraldine; Kahles, André; Bohnert, Regina; Drewe, Philipp; Mudrakarta, Pramod; Görnitz, Nico; Zeller, Georg; Rätsch, Gunnar
2014-05-01
We present Oqtans, an open-source workbench for quantitative transcriptome analysis that is integrated into Galaxy. Its distinguishing features include customizable computational workflows and a modular pipeline architecture that facilitates comparative assessment of tool and data quality. Oqtans integrates an assortment of machine learning-powered tools into Galaxy, which show superior or equal performance to state-of-the-art tools. Implemented tools comprise a complete transcriptome analysis workflow: short-read alignment, transcript identification/quantification and differential expression analysis. Oqtans and Galaxy facilitate persistent storage, data exchange and documentation of intermediate results and analysis workflows. We illustrate how Oqtans aids the interpretation of data from different experiments in easy to understand use cases. Users can easily create their own workflows and extend Oqtans by integrating specific tools. Oqtans is available as (i) a cloud machine image with a demo instance at cloud.oqtans.org, (ii) a public Galaxy instance at galaxy.cbio.mskcc.org, (iii) a git repository containing all installed software (oqtans.org/git); most of which is also available from (iv) the Galaxy Toolshed and (v) a share string to use along with Galaxy CloudMan.
A Modular Artificial Intelligence Inference Engine System (MAIS) for support of on orbit experiments
NASA Technical Reports Server (NTRS)
Hancock, Thomas M., III
1994-01-01
This paper describes a Modular Artificial Intelligence Inference Engine System (MAIS) support tool that would provide health and status monitoring, cognitive replanning, analysis and support of on-orbit Space Station, Spacelab experiments and systems.
Wang, Ximing; Liu, Brent J; Martinez, Clarisa; Zhang, Xuejun; Winstein, Carolee J
2015-01-01
Imaging-based clinical trials can benefit from a solution to efficiently collect, analyze, and distribute multimedia data at various stages within the workflow. Currently, the data management needs of these trials are typically addressed with custom-built systems. However, software development of custom-built systems for versatile workflows can be resource-consuming. To address these challenges, we present a system with a workflow engine for imaging-based clinical trials. The system enables a project coordinator to build a data collection and management system specifically related to study protocol workflow without programming. A Web Access to DICOM Objects (WADO) module with novel features is integrated to further facilitate imaging-related studies. The system was initially evaluated in an imaging-based rehabilitation clinical trial. The evaluation shows that the cost of developing the system can be much reduced compared to a custom-built system. By providing a solution to customize a system and automate the workflow, the system will save on development time and reduce errors, especially for imaging clinical trials. PMID:25870169
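For readers unfamiliar with WADO, the module above retrieves DICOM objects through parameterized HTTP requests. The sketch below builds such a request; the parameter names follow the DICOM WADO-URI standard, while the server URL and UIDs are placeholders:

    # Sketch of a WADO-URI request for a single DICOM object.
    from urllib.parse import urlencode

    def wado_url(base, study_uid, series_uid, object_uid,
                 content_type="application/dicom"):
        """Build a WADO-URI GET request for one DICOM object."""
        params = {
            "requestType": "WADO",          # fixed value in the WADO-URI standard
            "studyUID": study_uid,
            "seriesUID": series_uid,
            "objectUID": object_uid,
            "contentType": content_type,
        }
        return base + "?" + urlencode(params)

    print(wado_url("https://pacs.example.org/wado",
                   "1.2.840.113619.2.55.1",
                   "1.2.840.113619.2.55.2",
                   "1.2.840.113619.2.55.3"))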
An Auto-management Thesis Program WebMIS Based on Workflow
NASA Astrophysics Data System (ADS)
Chang, Li; Jie, Shi; Weibo, Zhong
An auto-management WebMIS based on workflow for a bachelor thesis program is given in this paper. A module used for workflow dispatching is designed and realized using MySQL and J2EE according to the working principle of a workflow engine. The module can automatically dispatch the workflow according to the system date, the login information, and the work status of the user. The WebMIS shifts management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.
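A hedged sketch of the dispatching rule described above, with role names, statuses, and the deadline invented for illustration (the actual module is implemented in J2EE against MySQL):

    # Toy dispatcher: route a thesis-workflow step from the current date,
    # the logged-in role, and the work status. All values are hypothetical.
    import datetime

    def dispatch(login_role, work_status, today=None):
        today = today or datetime.date.today()
        deadline = datetime.date(today.year, 5, 31)   # hypothetical thesis deadline
        if work_status == "draft" and login_role == "student":
            return "open editing form"
        if work_status == "submitted" and login_role == "advisor":
            return "open review form"
        if today > deadline and work_status != "approved":
            return "flag as overdue"
        return "show read-only view"

    print(dispatch("advisor", "submitted", datetime.date(2024, 4, 10)))  # open review form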
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
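The uniform-interface idea behind this chaining can be pictured with a toy sketch; the three components and their parameter keys below are invented stand-ins, not the actual HyspIRI models:

    # Every stage shares one call signature (params dict in, params dict out),
    # so a workflow is just a list of stages applied in order.
    def radiative_transfer(params):
        params["radiance"] = params["albedo"] * params.get("solar_irradiance", 1360.0)
        return params

    def sensor_model(params):
        params["measured"] = params["radiance"] * 0.98   # toy instrument response
        return params

    def retrieval(params):
        params["retrieved_albedo"] = (params["measured"]
                                      / params.get("solar_irradiance", 1360.0) / 0.98)
        return params

    def run_workflow(stages, params):
        for stage in stages:          # each stage has the same uniform interface
            params = stage(params)
        return params

    result = run_workflow([radiative_transfer, sensor_model, retrieval], {"albedo": 0.3})
    print(result["retrieved_albedo"])  # recovers 0.3 in this idealized chain

In the OSSE itself each stage is a Web Service endpoint rather than a local function, but the uniform pass-along contract is the same.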
RetroPath2.0: A retrosynthesis workflow for metabolic engineers.
Delépine, Baudoin; Duigou, Thomas; Carbonell, Pablo; Faulon, Jean-Loup
2018-01-01
Synthetic biology applied to industrial biotechnology is transforming the way we produce chemicals. However, despite advances in the scale and scope of metabolic engineering, the research and development process still remains costly. In order to expand the chemical repertoire for the production of next generation compounds, a major engineering biology effort is required in the development of novel design tools that target chemical diversity through rapid and predictable protocols. Addressing that goal involves retrosynthesis approaches that explore the chemical biosynthetic space. However, the complexity associated with the large combinatorial retrosynthesis design space has often been recognized as the main challenge hindering the approach. Here, we provide RetroPath2.0, an automated open source workflow for retrosynthesis based on generalized reaction rules that performs the retrosynthesis search from chassis to target through an efficient and well-controlled protocol. Its ease of use and the versatility of its applications make this tool a valuable addition to the biological engineer's bench desk. We show through several examples the application of the workflow to biotechnologically relevant problems, including the identification of alternative biosynthetic routes through enzyme promiscuity and the development of biosensors. We demonstrate in that way the ability of the workflow to streamline retrosynthesis pathway design and its major role in reshaping the design-build-test-learn pipeline by driving the process toward the objective of optimizing bioproduction. The RetroPath2.0 workflow is built using tools developed by the bioinformatics and cheminformatics community; because it is open source, we anticipate that community contributions will further expand its features. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
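The core loop of rule-based retrosynthesis can be sketched compactly. This toy search (not RetroPath2.0 itself; the rule set and compound names are invented) expands a target backward until every precursor is native to the chassis:

    # Toy rule-based retrosynthesis: rules map a product to candidate precursor
    # sets; breadth-first search stops when all leaves are chassis metabolites.
    from collections import deque

    RULES = {                        # product -> list of possible precursor sets
        "target": [{"intermediate_a"}, {"intermediate_b"}],
        "intermediate_a": [{"glucose"}],
        "intermediate_b": [{"pyruvate", "acetyl_coa"}],
    }
    CHASSIS = {"glucose", "pyruvate", "acetyl_coa"}   # native to the host

    def retrosynthesis(target, max_depth=5):
        """Return a list of (compound, precursors) steps, or None."""
        queue = deque([(frozenset({target}), [])])
        while queue:
            frontier, path = queue.popleft()
            if frontier <= CHASSIS:
                return path                            # everything is chassis-native
            if len(path) >= max_depth:
                continue
            compound = next(iter(frontier - CHASSIS))  # expand one non-native compound
            for precursors in RULES.get(compound, []):
                new_frontier = (frontier - {compound}) | precursors
                queue.append((frozenset(new_frontier), path + [(compound, precursors)]))
        return None

    print(retrosynthesis("target"))

RetroPath2.0 additionally scores rules, controls ring/atom mapping, and bounds the search; this sketch only shows the backward-expansion skeleton.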
Gauvin, Robert; Khademhosseini, Ali
2011-01-01
Micro- and nanoscale technologies have emerged as powerful tools in the fabrication of engineered tissues and organs. Here we focus on the application of these techniques to improve engineered tissue architecture and function using modular and directed self-assembly and highlight the emergence of this new class of materials for biomedical applications. PMID:21627163
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC.
Publication of conference presentations includes--(1) a brief review of current modular standard development, (2) the statistical status of modular practice, (3) availability of modular products, and (4) educational programs on modular coordination. Included are--(1) explanatory diagrams, (2) text of an open panel discussion, and (3) a list of…
Modular Knowledge Representation and Reasoning in the Semantic Web
NASA Astrophysics Data System (ADS)
Serafini, Luciano; Homola, Martin
Construction of modular ontologies by combining different modules is becoming a necessity in ontology engineering in order to cope with the increasing complexity of the ontologies and the domains they represent. The modular ontology approach takes inspiration from software engineering, where modularization is a widely acknowledged feature. Distributed reasoning is the other side of the coin of modular ontologies: given an ontology comprising a set of modules, it is desirable to perform reasoning by combining multiple reasoning processes performed locally on each of the modules. In the last ten years, a number of approaches for combining logics have been developed in order to formalize modular ontologies. In this chapter, we survey and compare the main formalisms for modular ontologies and distributed reasoning in the Semantic Web. We select four formalisms built on the formal logical grounds of Description Logics: Distributed Description Logics, ℰ-connections, Package-based Description Logics and Integrated Distributed Description Logics. We concentrate on the expressivity and distinctive modeling features of each framework. We also discuss the reasoning capabilities of each framework.
Modularity Induced Gating and Delays in Neuronal Networks
Shein-Idelson, Mark; Cohen, Gilad; Hanein, Yael
2016-01-01
Neural networks, despite their highly interconnected nature, exhibit distinctly localized and gated activation. Modularity, a distinctive feature of neural networks, has been recently proposed as an important parameter determining the manner by which networks support activity propagation. Here we use an engineered biological model, consisting of engineered rat cortical neurons, to study the role of modular topology in gating the activity between cell populations. We show that pairs of connected modules support conditional propagation (transmitting stronger bursts with higher probability), long delays and propagation asymmetry. Moreover, large modular networks manifest diverse patterns of both local and global activation. Blocking inhibition decreased activity diversity and replaced it with highly consistent transmission patterns. By independently controlling modularity and disinhibition, experimentally and in a model, we posit that modular topology is an important parameter affecting activation localization and is instrumental for population-level gating by disinhibition. PMID:27104350
NASA Astrophysics Data System (ADS)
Agram, P. S.; Gurrola, E. M.; Lavalle, M.; Sacco, G. F.; Rosen, P. A.
2016-12-01
The InSAR Scientific Computing Environment (ISCE) provides both a modular, flexible, and extensible framework for building software components and applications that work together seamlessly, as well as a toolbox for processing InSAR data into higher level geodetic image products from a diverse array of radar satellites and aircraft. ISCE easily scales to serve as the SAR processing engine at the core of the NASA JPL Advanced Rapid Imaging and Analysis (ARIA) Center for Natural Hazards as well as a software toolbox for individual scientists working with SAR data. ISCE is planned as the foundational element in processing NISAR data, enabling a new class of analyses that take greater advantage of the long time and large spatial scales of these data. ISCE in ARIA is also a SAR Foundry for development of new processing components and workflows to meet the needs of both large processing centers and individual users. The ISCE framework contains object-oriented Python components layered to construct Python InSAR components that manage legacy Fortran/C InSAR programs. The Python user interface enables both command-line deployment of workflows as well as an interactive "sand box" (the Python interpreter) where scientists can "play" with the data. Recent developments in ISCE include the addition of components to ingest Sentinel-1A SAR data (both stripmap and TOPS-mode) and a new workflow for processing the TOPS-mode data. New components are being developed to exploit polarimetric SAR data to provide the ecosystem and land-cover/land-use change communities with rigorous and efficient tools to perform multi-temporal, polarimetric and tomographic analyses in order to generate calibrated, geocoded and mosaicked Level-2 and Level-3 products (e.g., maps of above-ground biomass or forest disturbance). ISCE has been downloaded by over 200 users under a license available to WinSAR members through the Unavco.org website. Others may apply directly to JPL for a license at download.jpl.nasa.gov.
TAMU: A New Space Mission Operations Paradigm
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Ruszkowski, James; Haensly, Jean; Pennington, Granvil A.; Hogle, Charles
2011-01-01
The Transferable, Adaptable, Modular and Upgradeable (TAMU) Flight Production Process (FPP) is a model-centric System of System (SoS) framework which cuts across multiple organizations and their associated facilities, that are, in the most general case, in geographically diverse locations, to develop the architecture and associated workflow processes for a broad range of mission operations. Further, TAMU FPP envisions the simulation, automatic execution and re-planning of orchestrated workflow processes as they become operational. This paper provides the vision for the TAMU FPP paradigm. This includes a complete, coherent technique, process and tool set that result in an infrastructure that can be used for full lifecycle design and decision making during any flight production process. A flight production process is the process of developing all products that are necessary for flight.
Business process re-engineering a cardiology department.
Bakshi, Syed Murtuza Hussain
2014-01-01
The health care sector is the world's third largest industry and is facing several problems such as excessive waiting times for patients, lack of access to information, high costs of delivery and medical errors. Health care managers seek the help of process re-engineering methods to discover the best processes and to re-engineer existing processes to optimize productivity without compromising on quality. Business process re-engineering refers to the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality and speed. The present study was carried out at a tertiary care corporate hospital with a 1000-plus-bed facility. A descriptive study and case study method is used with intensive, careful and complete observation of patient flow, delays, shortcomings in patient movement and workflow. Data were collected through observations and informal interviews and analyzed by matrix analysis. Flowcharts were drawn for the various work activities of the cardiology department, including the workflow of the admission process, the workflow in the ward and ICCU, the workflow of the patient for catheterization laboratory procedures, and the billing and discharge process. The problems of the existing system were studied and the necessary improvements were recommended for the cardiology department, illustrated with flowcharts.
KNIME4NGS: a comprehensive toolbox for next generation sequencing analysis.
Hastreiter, Maximilian; Jeske, Tim; Hoser, Jonathan; Kluge, Michael; Ahomaa, Kaarin; Friedl, Marie-Sophie; Kopetzky, Sebastian J; Quell, Jan-Dominik; Mewes, H Werner; Küffner, Robert
2017-05-15
Analysis of Next Generation Sequencing (NGS) data requires the processing of large datasets by chaining various tools with complex input and output formats. In order to automate data analysis, we propose to standardize NGS tasks into modular workflows. This simplifies reliable handling and processing of NGS data, and corresponding solutions become substantially more reproducible and easier to maintain. Here, we present a documented, Linux-based toolbox of 42 processing modules that are combined to construct workflows facilitating a variety of tasks such as DNAseq and RNAseq analysis. We also describe important technical extensions. The high throughput executor (HTE) helps to increase the reliability and to reduce manual interventions when processing complex datasets. We also provide a dedicated binary manager that assists users in obtaining the modules' executables and keeping them up to date. As the basis for this actively developed toolbox, we use the workflow management software KNIME. See http://ibisngs.github.io/knime4ngs for nodes and user manual (GPLv3 license). robert.kueffner@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience
Stockton, David B.; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
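A brief sketch of the submission abstraction described above (the real engine is implemented in MATLAB and models 22 submission stages; the class and field names here are invented):

    # One interface per compute resource, so the same simulation spec can be
    # submitted to a desktop or a cluster. A stand-in, not NeuroManager code.
    import datetime

    class ComputeResource:
        """Common interface: subclasses implement resource-specific stages."""
        def upload(self, sim_spec): raise NotImplementedError
        def launch(self, sim_spec): raise NotImplementedError

    class LocalHost(ComputeResource):
        def upload(self, sim_spec):
            print(f"staging {sim_spec['model']} locally")
        def launch(self, sim_spec):
            print(f"running {sim_spec['simulator']} with dt={sim_spec['dt']}")

    def submit(resource, sim_spec):
        """A few of the submission stages: stamp, stage files, launch."""
        sim_spec["submitted_at"] = datetime.datetime.now().isoformat()  # provenance
        resource.upload(sim_spec)
        resource.launch(sim_spec)
        return sim_spec

    submit(LocalHost(), {"model": "purkinje.hoc", "simulator": "NEURON", "dt": 0.025})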
A modular approach to creating large engineered cartilage surfaces.
Ford, Audrey C; Chui, Wan Fung; Zeng, Anne Y; Nandy, Aditya; Liebenberg, Ellen; Carraro, Carlo; Kazakia, Galateia; Alliston, Tamara; O'Connell, Grace D
2018-01-23
Native articular cartilage has limited capacity to repair itself from focal defects or osteoarthritis. Tissue engineering has provided a promising biological treatment strategy that is currently being evaluated in clinical trials. However, translating these techniques to the development of large engineered tissues remains a significant challenge. In this study, we present a method for developing large-scale engineered cartilage surfaces through modular fabrication. Modular Engineered Tissue Surfaces (METS) uses the well-known, but largely under-utilized self-adhesion properties of de novo tissue to create large scaffolds with nutrient channels. Compressive mechanical properties were evaluated throughout METS specimens, and the tensile mechanical strength of the bonds between attached constructs was evaluated over time. Raman spectroscopy, biochemical assays, and histology were performed to investigate matrix distribution. Results showed that by Day 14, stable connections had formed between the constructs in the METS samples. By Day 21, bonds were robust enough to form a rigid sheet and continued to increase in size and strength over time. Compressive mechanical properties and glycosaminoglycan (GAG) content of METS and individual constructs increased significantly over time. The METS technique builds on established tissue engineering accomplishments of developing constructs with GAG composition and compressive properties approaching native cartilage. This study demonstrated that modular fabrication is a viable technique for creating large-scale engineered cartilage, which can be broadly applied to many tissue engineering applications and construct geometries. Copyright © 2017 Elsevier Ltd. All rights reserved.
2016-12-27
MOLLE (Modular Lightweight Load Carrying Equipment) Human Factors Engineering. U.S. Army Natick Soldier Research, Development and Engineering Center, 2015. Approved for public release; distribution is unlimited.
Modular Software for Spacecraft Navigation Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Truong, S. H.; Hartman, K. R.; Weidow, D. A.; Berry, D. L.; Oza, D. H.; Long, A. C.; Joyce, E.; Steger, W. L.
1996-01-01
The Goddard Space Flight Center Flight Dynamics and Mission Operations Divisions have jointly investigated the feasibility of engineering modular Global Positioning System (GPS) navigation software to support both real-time flight and ground postprocessing configurations. The goals of this effort are to define standard GPS data interfaces and to engineer standard, reusable navigation software components that can be used to build a broad range of GPS navigation support applications. The paper discusses the GPS modular software (GMOD) system and operations concepts, major requirements, candidate software architecture, feasibility assessment and recommended software interface standards. In addition, ongoing efforts to broaden the scope of the initial study and to develop modular software to support autonomous navigation using GPS are addressed.
Cladé, Thierry; Snyder, Joshua C.
2010-01-01
Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775
Zuo, Yicong; Liu, Xiaolu; Wei, Dan; Sun, Jing; Xiao, Wenqian; Zhao, Huan; Guo, Likun; Wei, Qingrong; Fan, Hongsong; Zhang, Xingdong
2015-05-20
Modular tissue engineering holds great potential for regenerating natural complex tissues by engineering three-dimensional modular scaffolds with predefined geometry and biological characteristics. In modular tissue-like construction, a scaffold with an appropriate mechanical rigidity for assembly and high biocompatibility for cell survival is the key to successful bioconstruction. In this work, a series of composite hydrogels (GH0, GH1, GH2, and GH3) based on a combination of methacrylated gelatin (GelMA) and hydroxyapatite (HA) was exploited to enhance hydrogel mechanical rigidity and promote cell functional expression for osteon biofabrication. These composite hydrogels presented a lower swelling ratio, higher mechanical moduli, and better biocompatibility when compared to the pure GelMA hydrogel. Furthermore, on the basis of the composite hydrogel and photolithography, we successfully constructed an osteon-like concentric double-ring structure in which the inner ring encapsulating human umbilical vascular endothelial cells (HUVECs) was designed to imitate a blood vessel tubule while the outer ring encapsulating human osteoblast-like cells (MG63s) acts as part of the bone. During the coculture period, MG63s and HUVECs exhibited not only satisfying growth status but also enhanced expression of osteogenesis- and angiogenesis-related genes. These results demonstrate that this GelMA-HA composite hydrogel system is promising for modular tissue engineering.
Modular Engine Instrumentation System
NASA Technical Reports Server (NTRS)
Rice, W. J.; Birchenough, A. G.
1982-01-01
System that provides information and measurements never obtained before in real time has been developed. System shows not only real-time measurements but also results of computations of key combustion parameters in meaningful and easily understood display. Standard commercially-available shaft encoder plus data from pressure transducer act as principal drivers to device. Eventually, modular system could be developed into onboard controller for automobile engines.
A Modular Aero-Propulsion System Simulation of a Large Commercial Aircraft Engine
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.; Litt, Jonathan S.; Frederick, Dean K.
2008-01-01
A simulation of a commercial engine has been developed in a graphical environment to meet the increasing need across the controls and health management community for a common research and development platform. This paper describes the Commercial Modular Aero Propulsion System Simulation (C-MAPSS), which is representative of a 90,000-lb-thrust-class, two-spool, high-bypass-ratio commercial turbofan engine. A control law resembling the state of the art on board modern aircraft engines is included, consisting of a fan-speed control loop supplemented by relevant engine limit protection regulator loops. The objective of this paper is to provide a top-down overview of the complete engine simulation package.
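The control structure named above, a fan-speed loop supplemented by limit-protection regulators, is commonly realized by computing a command per loop and taking a min/max selection. A toy sketch with invented gains, limits, and signal names (not C-MAPSS values):

    # Min-select fuel-flow logic: each upper-limit regulator computes the fuel
    # flow that would hold its variable at the limit; the smallest wins.
    def proportional(cmd, meas, gain):
        return gain * (cmd - meas)

    def fuel_flow_command(n_fan_cmd, sensors):
        base = proportional(n_fan_cmd, sensors["Nf"], gain=0.02)   # fan-speed loop
        limiters = [
            proportional(16000.0, sensors["Nc"], gain=0.015),      # core-speed limit
            proportional(1800.0, sensors["T48"], gain=0.010),      # turbine-temp limit
        ]
        return min([base] + limiters)   # honor every upper limit

    print(fuel_flow_command(2100.0, {"Nf": 2050.0, "Nc": 15800.0, "T48": 1650.0}))

Here the fan loop governs because no limit is near; as Nc or T48 approaches its limit, the corresponding regulator output shrinks and takes over the min-selection.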
Anggraeni, Melisa R; Connors, Natalie K; Wu, Yang; Chuan, Yap P; Lua, Linda H L; Middelberg, Anton P J
2013-09-13
Biomolecular engineering enables synthesis of improved proteins through synergistic fusion of modules from unrelated biomolecules. Modularization of peptide antigen from an unrelated pathogen for presentation on a modular virus-like particle (VLP) represents a new and promising approach to synthesize safe and efficacious vaccines. Addressing a key knowledge gap in modular VLP engineering, this study investigates the underlying fundamentals affecting the ability of induced antibodies to recognize the native pathogen. Specifically, this quality of immune response is correlated to the peptide antigen module structure. We modularized a helical peptide antigen element, helix 190 (H190) from the influenza hemagglutinin (HA) receptor binding region, for presentation on murine polyomavirus VLP, using two strategies aimed to promote H190 helicity on the VLP. In the first strategy, H190 was flanked by GCN4 structure-promoting elements within the antigen module; in the second, dual H190 copies were arrayed as tandem repeats in the module. Molecular dynamics simulation predicted that tandem repeat arraying would minimize secondary structural deviation of modularized H190 from its native conformation. In vivo testing supported this finding, showing that although both modularization strategies conferred high H190-specific immunogenicity, tandem repeat arraying of H190 led to a strikingly higher immune response quality, as measured by ability to generate antibodies recognizing a recombinant HA domain and split influenza virion. These findings provide new insights into the rational engineering of VLP vaccines, and could ultimately enable safe and efficacious vaccine design as an alternative to conventional approaches necessitating pathogen cultivation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modular Heat Exchanger With Integral Heat Pipe
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.
1992-01-01
Modular heat exchanger with integral heat pipe transports heat from source to Stirling engine. Alternative to heat exchangers depending on integrities of thousands of brazed joints, contains only 40 brazed tubes.
Leaf LIMS: A Flexible Laboratory Information Management System with a Synthetic Biology Focus.
Craig, Thomas; Holland, Richard; D'Amore, Rosalinda; Johnson, James R; McCue, Hannah V; West, Anthony; Zulkower, Valentin; Tekotte, Hille; Cai, Yizhi; Swan, Daniel; Davey, Robert P; Hertz-Fowler, Christiane; Hall, Anthony; Caddick, Mark
2017-12-15
This paper presents Leaf LIMS, a flexible laboratory information management system (LIMS) designed to address the complexity of synthetic biology workflows. At the project's inception, no LIMS was designed specifically to address synthetic biology processes; most systems focused on either next generation sequencing or biobanks and clinical sample handling. Leaf LIMS implements integrated project, item, and laboratory stock tracking, offering complete sample and construct genealogy, materials and lot tracking, and modular assay data capture. Hence, it enables highly configurable task-based workflows and supports data capture from project inception to completion. As such, in addition to supporting synthetic biology, it is ideal for many laboratory environments with multiple projects and users. The system is deployed as a web application through Docker and is provided under a permissive MIT license. It is freely available for download at https://leaflims.github.io.
Semantic Web Service Delivery in Healthcare Based on Functional and Non-Functional Properties.
Schweitzer, Marco; Gorfer, Thilo; Hörbst, Alexander
2017-01-01
In the past decades, considerable effort has been devoted to the trans-institutional exchange of healthcare data through electronic health records (EHR) in order to obtain a lifelong, shared, accessible health record of a patient. Beyond basic information exchange, there is a growing need for Information and Communication Technology (ICT) to support the use of the collected health data in an individual, case-specific, workflow-based manner. This paper presents results on how workflows can be used to process data from electronic health records, following a semantic web service approach that enables automatic discovery, composition and invocation of suitable web services. Based on this solution, the user (physician) can define their needs from a domain-specific perspective, whereas the ICT system fulfills those needs with modular web services. By also involving non-functional properties in the service selection, this approach is all the more suitable for the dynamic medical domain.
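The two-stage selection described above, functional matching followed by ranking on non-functional properties, can be sketched in a few lines; the service records and thresholds are invented for illustration:

    # Toy service matchmaking: filter by what a service does (functional),
    # then rank by how well it does it (non-functional: latency, then cost).
    services = [
        {"name": "FetchLabValues",  "provides": "lab_results",   "latency_ms": 120, "cost": 0.0},
        {"name": "FetchLabValues2", "provides": "lab_results",   "latency_ms": 40,  "cost": 1.5},
        {"name": "RenderTimeline",  "provides": "visualization", "latency_ms": 80,  "cost": 0.0},
    ]

    def discover(goal, max_latency_ms, services):
        candidates = [s for s in services
                      if s["provides"] == goal and s["latency_ms"] <= max_latency_ms]
        return sorted(candidates, key=lambda s: (s["latency_ms"], s["cost"]))

    best = discover("lab_results", max_latency_ms=100, services=services)
    print(best[0]["name"])   # FetchLabValues2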
NASA Astrophysics Data System (ADS)
Harris, A. T.; Ramachandran, R.; Maskey, M.
2013-12-01
The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out of the box. Thus, as part of the NASA-funded project, Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within the IDL Workbench. Such additional features within the IDL Workbench are possible because the IDL Workbench is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionality. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL Workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows can be executed in batch mode on large data in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.
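The submit-to-the-cloud step described above reduces to a parameterized HTTP request. A hypothetical sketch follows; the endpoint path, payload fields, and task name are placeholders, not the actual ESE API:

    # Hedged sketch of submitting a processing task to a server-side engine
    # over HTTP. Everything server-specific here is an assumption.
    import json
    import urllib.request

    def submit_task(base_url, task_name, parameters):
        """POST a task request and return the decoded JSON response."""
        payload = json.dumps({"taskName": task_name,
                              "inputParameters": parameters}).encode()
        req = urllib.request.Request(base_url + "/services/submit", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Example call (requires a live server; names are hypothetical):
    # result = submit_task("http://ese.example.org", "SpectralIndex",
    #                      {"INPUT_RASTER": "scene.dat", "INDEX": "NDVI"})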
Workflow management systems in radiology
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim
1998-07-01
In a situation of shrinking health care budgets, increasing cost pressure and growing demands to increase the efficiency and the quality of medical services, health care enterprises are forced to optimize or completely re-design their processes. Although information technology is agreed to potentially contribute to cost reduction and efficiency improvement, the real success factors are the re-definition and automation of processes: Business Process Re-engineering and Workflow Management. In this paper we discuss architectures for the use of workflow management systems in radiology. We propose to move forward from information systems in radiology (RIS, PACS) to Radiology Management Systems, in which workflow functionality (process definitions and process automation) is implemented through autonomous workflow management systems (WfMS). In a workflow-oriented architecture, an autonomous workflow enactment service communicates with workflow client applications via standardized interfaces. In this paper, we discuss the need for and the benefits of such an approach. The separation of workflow management system and application systems is emphasized, as are the consequences that arise for the architecture of workflow-oriented information systems. This includes an appropriate workflow terminology and the definition of standard interfaces for workflow-aware application systems. Workflow studies in various institutions have shown that most of the processes in radiology are well structured and suited for a workflow management approach. Numerous commercially available Workflow Management Systems (WfMS) were investigated, and some of them, which are process-oriented and application-independent, appear suitable for use in radiology.
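The separation argued for here can be pictured as a small standardized interface between the enactment service and workflow-aware applications; the method names below are illustrative, loosely inspired by the WfMC reference model rather than any product API:

    # The WfMS routes process steps to applications only through this contract,
    # so RIS/PACS clients and the engine can evolve independently.
    class WorkflowClientApplication:
        """What a workflow-aware application (e.g., a worklist client) exposes."""
        def offer_work_item(self, item): raise NotImplementedError
        def report_complete(self, item): raise NotImplementedError

    class Worklist(WorkflowClientApplication):
        def __init__(self):
            self.items = []
        def offer_work_item(self, item):
            self.items.append(item)
            print(f"worklist gained: {item}")
        def report_complete(self, item):
            self.items.remove(item)

    class EnactmentService:
        """The autonomous WfMS: dispatches steps to registered applications."""
        def __init__(self):
            self.clients = []
        def register(self, client):
            self.clients.append(client)
        def dispatch(self, step):
            for client in self.clients:
                client.offer_work_item(step)

    service = EnactmentService()
    service.register(Worklist())
    service.dispatch("CT thorax: schedule reading")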
Wang, Baojun; Barahona, Mauricio; Buck, Martin
2013-01-01
Cells perceive a wide variety of cellular and environmental signals, which are often processed combinatorially to generate particular phenotypic responses. Here, we employ both single and mixed cell type populations, pre-programmed with engineered modular cell signalling and sensing circuits, as processing units to detect and integrate multiple environmental signals. Based on an engineered modular genetic AND logic gate, we report the construction of a set of scalable synthetic microbe-based biosensors comprising exchangeable sensory, signal processing and actuation modules. These cellular biosensors were engineered using distinct signalling sensory modules to precisely identify various chemical signals, and combinations thereof, with a quantitative fluorescent output. The genetic logic gate used can function as a biological filter and an amplifier to enhance the sensing selectivity and sensitivity of cell-based biosensors. In particular, an Escherichia coli consortium-based biosensor has been constructed that can detect and integrate three environmental signals (arsenic, mercury and copper ion levels) via either its native two-component signal transduction pathways or synthetic signalling sensors derived from other bacteria in combination with a cell-cell communication module. We demonstrate how a modular cell-based biosensor can be engineered predictably using exchangeable synthetic gene circuit modules to sense and integrate multiple-input signals. This study illustrates some of the key practical design principles required for the future application of these biosensors in broad environmental and healthcare areas. PMID:22981411
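A toy quantitative model conveys the AND-gate behavior described above: output stays low unless both inputs are present. The Hill parameters below are illustrative, not fitted to the paper's circuits:

    # Two-input genetic AND gate: the activation fractions of the two sensing
    # modules multiply, so fluorescence is high only when both signals are on.
    def hill(signal, k=1.0, n=2.0):
        """Fraction of promoter activation at a given inducer level."""
        return signal**n / (k**n + signal**n)

    def and_gate_output(signal_a, signal_b, max_fluorescence=1000.0):
        return max_fluorescence * hill(signal_a) * hill(signal_b)

    for a, b in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]:
        print(f"inputs ({a}, {b}) -> output {and_gate_output(a, b):.0f}")

Only the (5.0, 5.0) case produces a strong output, which is also why the gate acts as a filter: weak or single-signal inputs are suppressed.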
The Modular Aero-Propulsion System Simulation (MAPSS) Users' Guide
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Melcher, Kevin J.
2004-01-01
The Modular Aero-Propulsion System Simulation is a flexible turbofan engine simulation environment that provides the user with a platform to develop advanced control algorithms. It is capable of testing the performance of control designs on a validated and verified generic engine model. In addition, it is able to generate state-space linear models of the engine model to aid in controller design. The engine model used in MAPSS is a generic high-pressure-ratio, dual-spool, low-bypass, military-type, variable cycle turbofan engine with a digital controller. MAPSS is controlled by a graphical user interface (GUI) and this guide explains how to use it to take advantage of the capabilities of MAPSS.
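Linear-model generation of the kind mentioned above is typically done by perturbing a nonlinear simulation about a trim point. A small sketch using central finite differences, with a two-state stand-in for the engine (not MAPSS itself):

    # Extract a state-space model x_dot = A x + B u from a nonlinear model.
    def engine_dynamics(x, u):
        """Toy nonlinear spool dynamics: x = [N1, N2], u = [fuel flow]."""
        return [-0.5 * x[0] + 0.1 * x[1] + 2.0 * u[0],
                0.05 * x[0] - 0.8 * x[1] + 1.5 * u[0] ** 2]

    def linearize(f, x0, u0, eps=1e-6):
        """Central differences for A = df/dx and B = df/du at a trim point."""
        n, m = len(x0), len(u0)
        A = [[0.0] * n for _ in range(n)]
        B = [[0.0] * m for _ in range(n)]
        for j in range(n):                        # columns of A
            xp, xm = list(x0), list(x0)
            xp[j] += eps
            xm[j] -= eps
            fp, fm = f(xp, u0), f(xm, u0)
            for i in range(n):
                A[i][j] = (fp[i] - fm[i]) / (2 * eps)
        for j in range(m):                        # columns of B
            up, um = list(u0), list(u0)
            up[j] += eps
            um[j] -= eps
            fp, fm = f(x0, up), f(x0, um)
            for i in range(n):
                B[i][j] = (fp[i] - fm[i]) / (2 * eps)
        return A, B

    A, B = linearize(engine_dynamics, x0=[1.0, 1.0], u0=[0.5])
    print(A)   # approximately [[-0.5, 0.1], [0.05, -0.8]]
    print(B)   # approximately [[2.0], [1.5]]; d(1.5 u^2)/du = 3u = 1.5 at u = 0.5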
Engineering modular polyketide synthases for production of biofuels and industrial chemicals.
Cai, Wenlong; Zhang, Wenjun
2018-04-01
Polyketide synthases (PKSs) are among the most prolific biosynthetic factories for producing polyketides with diverse structures and biological activities. These enzymes have historically been studied and engineered to make unnatural polyketides for drug discovery, and have also recently been explored for synthesizing biofuels and industrial chemicals due to their versatility and customizability. Here, we review recent advances in the mechanistic understanding and engineering of modular PKSs for producing polyketide-derived chemicals, and provide perspectives on this relatively new application of PKSs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Engineering modular ester fermentative pathways in Escherichia coli.
Layton, Donovan S; Trinh, Cong T
2014-11-01
Sensation profiles are observed all around us and are made up of many different molecules, such as esters. These profiles can be mimicked in everyday items for use in foods, beverages, cosmetics, perfumes, solvents, and biofuels. Here, we developed a systematic 'natural' way to derive these products via fermentative biosynthesis. Each ester fermentative pathway was designed as an exchangeable ester production module for generating two precursors, alcohols and acyl-CoAs, that were condensed by an alcohol acyltransferase to produce a combinatorial library of unique esters. As a proof-of-principle, we coupled these ester modules with an engineered, modular Escherichia coli chassis in a plug-and-play fashion to create microbial cell factories for enhanced anaerobic production of a butyrate ester library. We demonstrated tight coupling between the modular chassis and ester modules for enhanced product biosynthesis, an engineered phenotype useful for directed metabolic pathway evolution. Compared to the wild type, the engineered cell factories yielded up to a 48-fold increase in butyrate ester production from glucose. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
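The module pairing described above is combinatorial at heart: every alcohol crossed with every acyl-CoA defines one candidate ester. A minimal enumeration, with compound lists chosen purely for illustration:

    # Each (alcohol, acyl-CoA) pair is one ester the acyltransferase
    # condensation could form, so 3 x 3 modules give a 9-member library.
    from itertools import product

    alcohols  = ["ethanol", "isobutanol", "butanol"]
    acyl_coas = ["acetyl-CoA", "propionyl-CoA", "butyryl-CoA"]

    library = list(product(alcohols, acyl_coas))
    for alc, acyl in library:
        print(f"{alc} + {acyl} -> ester")
    print(len(library), "candidate esters")

The butyrate ester library reported in the abstract corresponds to the butyryl-CoA column of this cross product, paired with the different alcohol modules.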
Prototype of Kepler Processing Workflows For Microscopy And Neuroinformatics
Astakhov, V.; Bandrowski, A.; Gupta, A.; Kulungowski, A.W.; Grethe, J.S.; Bouwer, J.; Molina, T.; Rowley, V.; Penticoff, S.; Terada, M.; Wong, W.; Hakozaki, H.; Kwon, O.; Martone, M.E.; Ellisman, M.
2016-01-01
We report on progress of employing the Kepler workflow engine to prototype “end-to-end” application integration workflows that concern data coming from microscopes deployed at the National Center for Microscopy Imaging Research (NCMIR). This system is built upon the mature code base of the Cell Centered Database (CCDB) and integrated rule-oriented data system (IRODS) for distributed storage. It provides integration with external projects such as the Whole Brain Catalog (WBC) and Neuroscience Information Framework (NIF), which benefit from NCMIR data. We also report on specific workflows which spawn from main workflows and perform data fusion and orchestration of Web services specific for the NIF project. This “Brain data flow” presents a user with categorized information about sources that have information on various brain regions. PMID:28479932
[Development of a medical equipment support information system based on PDF portable document].
Cheng, Jiangbo; Wang, Weidong
2010-07-01
Based on the organizational structure and management system of hospital medical engineering support, the medical engineering support workflow was integrated to ensure that medical engineering data are collected effectively, accurately, and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analyzed, and all work processes were recorded in portable electronic documents. Using XML middleware technology and a SQL Server database, the system implements process management, data calculation, submission, storage, and other functions. Practical application shows that the medical equipment support information system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly, and controllable. A medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide complete and accurate digital data.
NASA Astrophysics Data System (ADS)
Lengyel, F.; Yang, P.; Rosenzweig, B.; Vorosmarty, C. J.
2012-12-01
The Northeast Regional Earth System Model (NE-RESM, NSF Award #1049181) integrates weather research and forecasting models, terrestrial and aquatic ecosystem models, a water balance/transport model, and mesoscale and energy systems input-output economic models developed by an interdisciplinary research team from academia and government with expertise in physics, biogeochemistry, engineering, energy, economics, and policy. NE-RESM is intended to forecast the implications of planning decisions on the region's environment, ecosystem services, energy systems and economy through the 21st century. Integration of model components and the development of cyberinfrastructure for interacting with the system is facilitated with the integrated Rule Oriented Data System (iRODS), a distributed data grid that provides archival storage with metadata facilities and a rule-based workflow engine for automating and auditing scientific workflows.
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in that information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g., GRAM jobs and web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system was initially tested and evaluated with applications from the ES cluster.
A Modular Aerospike Engine Design Using Additive Manufacturing
NASA Technical Reports Server (NTRS)
Peugeot, John; Garcia, Chance; Burkhardt, Wendel
2014-01-01
A modular aerospike engine concept has been developed with the objective of demonstrating the viability of the aerospike design using additive manufacturing techniques. The aerospike system is a self-compensating design that allows for optimal performance over the entire flight regime and allows for the lowest possible mass vehicle designs. At low altitudes, improvements in Isp can be traded against chamber pressure, staging, and payload. In upper-stage applications, expansion ratio and engine envelope can be traded against nozzle efficiency. These features provide flexibility to the system designer when optimizing a complete vehicle stage. The aerospike concept is a good example of a component that has demonstrated improved performance capability but traditionally has manufacturing requirements that are too expensive and complex for use in a production vehicle. In recent years, additive manufacturing has emerged as a potential method for improving the speed and cost of building geometrically complex components in rocket engines. It offers a reduction in tooling overhead and significant improvements in the integration of design and manufacturing. In addition, the modularity of the engine design provides the ability to perform full-scale testing of the combustion devices outside of the full engine configuration. The proposed design uses a hydrocarbon-based gas-generator cycle, with plans to take advantage of existing powerhead hardware while focusing DDT&E resources on manufacturing and sub-system testing of the combustion devices. The major risks for the modular aerospike concept lie in the performance of the propellant feed system, the structural integrity of the additively manufactured components, and the aerodynamic efficiency of the exhaust flow.
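To see why an Isp gain trades directly against propellant and payload, the ideal rocket equation is enough: m0/mf = exp(delta-v / (Isp * g0)). The Python sketch below is illustrative only; the delta-v, dry mass, and Isp values are assumptions, not figures from the concept study.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(delta_v: float, isp: float, dry_mass: float) -> float:
    """Propellant needed for a given delta-v (ideal rocket equation)."""
    mass_ratio = math.exp(delta_v / (isp * G0))   # m0 / mf
    return dry_mass * (mass_ratio - 1.0)

# Illustrative stage: 9 km/s delta-v on a 10 t dry mass. Altitude
# compensation raises the mission-averaged Isp, shrinking the tank.
for isp in (330, 350, 370):  # seconds
    tonnes = propellant_mass(9000.0, isp, 10000.0) / 1000.0
    print(f"Isp {isp} s -> {tonnes:.1f} t propellant")
```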
Modular digital holographic fringe data processing system
NASA Technical Reports Server (NTRS)
Downward, J. G.; Vavra, P. C.; Schebor, F. S.; Vest, C. M.
1985-01-01
A software architecture suitable for reducing holographic fringe data into useful engineering data is developed and tested. The results, along with a detailed description of the proposed architecture for a Modular Digital Fringe Analysis System, are presented.
Game engines and immersive displays
NASA Astrophysics Data System (ADS)
Chang, Benjamin; Destefano, Marc
2014-02-01
While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
NASA Astrophysics Data System (ADS)
Ferreira da Silva, R.; Filgueira, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present Asterism, an open source data-intensive framework, which combines the Pegasus and dispel4py workflow systems. Asterism aims to simplify the effort required to develop data-intensive applications that run across multiple heterogeneous resources, without users having to: re-formulate their methods according to different enactment systems; manage the data distribution across systems; parallelize their methods; co-place and schedule their methods with computing resources; and store and transfer large/small volumes of data. Asterism's key element is to leverage the strengths of each workflow system: dispel4py allows developing scientific applications locally and then automatically parallelizing and scaling them on a wide range of HPC infrastructures with no changes to the application's code; Pegasus orchestrates the distributed execution of applications while providing portability, automated data management, recovery, debugging, and monitoring, without users needing to worry about the particulars of the target execution systems. Asterism leverages the levels of abstraction provided by each workflow system to describe hybrid workflows where no information about the underlying infrastructure is required beforehand. The feasibility of Asterism has been evaluated using the seismic ambient noise cross-correlation application, a common data-intensive analysis pattern used by many seismologists. The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The Asterism workflow is implemented as a Pegasus workflow composed of two tasks (Phase1 and Phase2), where each phase represents a dispel4py workflow. Pegasus tasks describe the input/output data at a logical level, the data dependencies between tasks, and the e-Infrastructures and execution engine used to run each dispel4py workflow. We have instantiated the workflow using data from 1000 stations from the IRIS services, and run it across two heterogeneous resources described as Docker containers: MPI (Container2) and Storm (Container3) clusters (Figure 1). Each dispel4py workflow is mapped to a particular execution engine, and data transfers between resources are automatically handled by Pegasus. Asterism is freely available online at http://github.com/dispel4py/pegasus_dispel4py.
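For readers unfamiliar with dispel4py, the sketch below shows the shape of a two-phase stream-based workflow like the one described. It assumes dispel4py's documented GenericPE/WorkflowGraph API; the PE names, port names, and toy processing functions are illustrative, not Asterism's actual code.

```python
# Two-phase stream pattern (preprocess, then cross-correlate), assuming
# dispel4py's GenericPE/WorkflowGraph API.
from dispel4py.core import GenericPE
from dispel4py.workflow_graph import WorkflowGraph

def normalise(trace):                    # stand-in for real preprocessing
    peak = max(abs(x) for x in trace) or 1.0
    return [x / peak for x in trace]

class Preprocess(GenericPE):             # Phase1: clean one trace
    def __init__(self):
        GenericPE.__init__(self)
        self._add_input('trace')
        self._add_output('clean')
    def _process(self, inputs):
        self.write('clean', normalise(inputs['trace']))

class CrossCorrelate(GenericPE):         # Phase2: correlate trace pairs
    def __init__(self):
        GenericPE.__init__(self)
        self._add_input('clean')
        self._add_output('xcorr')
        self.last = None
    def _process(self, inputs):
        if self.last is not None:        # zero-lag product of consecutive traces
            self.write('xcorr', sum(a * b for a, b in zip(self.last, inputs['clean'])))
        self.last = inputs['clean']

graph = WorkflowGraph()
graph.connect(Preprocess(), 'clean', CrossCorrelate(), 'clean')
# The same abstract graph can then be mapped unchanged to the sequential,
# MPI, or Storm execution engines.
```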
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duro, Francisco Rodrigo; Blas, Javier Garcia; Isaila, Florin
The increasing volume of scientific data and the limited scalability and performance of storage systems are currently presenting a significant limitation for the productivity of the scientific workflows running on both high-performance computing (HPC) and cloud platforms. Clearly needed is better integration of storage systems and workflow engines to address this problem. This paper presents and evaluates a novel solution that leverages codesign principles for integrating Hercules, an in-memory data store, with a workflow management system. We consider four main aspects: workflow representation, task scheduling, task placement, and task termination. As a result, the experimental evaluation on both cloud and HPC systems demonstrates significant performance and scalability improvements over existing state-of-the-art approaches.
CMS Configuration Editor: GUI based application for user analysis job
NASA Astrophysics Data System (ADS)
de Cosa, A.
2011-12-01
We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way and integrated within the CMS framework, which organizes user analysis code in a flexible way. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. Developing analysis jobs and managing the configuration of the many required modules can be a challenging task for users, especially for newcomers. For this reason a graphical tool, the Config Editor, has been conceived for editing and inspecting configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt this tool to create their analyses while visualizing the effects of their actions in real time: they can inspect the structure of their configuration, look at the modules included in the workflow, inspect the dependencies among the modules, and check the data flow. They can see which values parameters are set to and change them according to what is required by their analysis task. Integrating common tools into the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and defining a layer of abstraction from which all PAT tools inherit.
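As a point of reference for the kind of file the Config Editor manipulates, here is a minimal CMSSW-style Python configuration. The analyzer type and its parameters are hypothetical placeholders; only cms.Process, cms.Source, and cms.Path are standard CMSSW constructs, and a real PAT configuration would be far larger.

```python
# Minimal CMSSW-style job configuration of the kind the Config Editor
# inspects and edits (runnable only inside a CMSSW environment).
import FWCore.ParameterSet.Config as cms

process = cms.Process("ANALYSIS")
process.source = cms.Source("PoolSource",
    fileNames=cms.untracked.vstring("file:input.root"))
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(1000))

# Hypothetical user analysis module with a tunable cut; the GUI exposes
# such parameters, module dependencies, and the resulting data flow.
process.myMuonAnalyzer = cms.EDAnalyzer("MyMuonAnalyzer",
    muonSource=cms.InputTag("muons"),
    minPt=cms.double(20.0))

process.p = cms.Path(process.myMuonAnalyzer)
```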
A WorkFlow Engine Oriented Modeling System for Hydrologic Sciences
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2009-12-01
In recent years the use of workflow engines for carrying out modeling and data analysis tasks has gained increased attention in the science and engineering communities. Tasks like processing raw data coming from sensors and passing these raw data streams to filters for QA/QC procedures may require multiple, complicated steps that need to be repeated over and over again. A workflow sequence that carries out a number of steps of varying complexity is an ideal approach for dealing with these tasks because the sequence can be stored, called up, and repeated again and again. This has several advantages: for one, it ensures repeatability of processing steps and with that provenance, an issue that is increasingly important in the science and engineering communities. It also permits handing off lengthy, time-consuming, and error-prone tasks to a chain of processing actions that are carried out automatically, reducing the chance of error on the one hand and freeing up time for other tasks on the other. This paper presents the development of a workflow-engine-embedded modeling system that allows users to build up working sequences for carrying out numerical modeling tasks in the hydrologic sciences. Trident, which facilitates creating, running, and sharing scientific data analysis workflows, serves as the central workflow engine of the modeling system. Current functionalities of the modeling system include digital watershed processing, online data retrieval, hydrologic simulation, and post-event analysis, stored as sequences or modules respectively. The sequences can be invoked to carry out their preset tasks in order, for example, triangulating a watershed from a raw DEM; the modules, each encapsulating a certain function, can be selected and connected through a GUI workboard to form sequences. The modeling system is demonstrated by setting up a new sequence for simulating rainfall-runoff processes, which involves an embedded Penn State Integrated Hydrologic Model (PIHM) module for hydrologic simulation as a kernel, a DEM-processing sub-sequence that prepares geospatial data for PIHM, a data retrieval module that accesses time series data from online data repositories via web services or from a local database, and a post-data-management module that stores, visualizes, and analyzes model outputs.
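The sequence/module distinction can be captured in a few lines of code. The sketch below uses plain Python rather than Trident (whose workflows are composed graphically) to show how encapsulated modules compose into a stored, replayable sequence; the step names are illustrative.

```python
# Modules read and extend a shared context; a sequence is their stored,
# replayable composition. Step names here are illustrative only.
from typing import Callable, Dict, Any

Module = Callable[[Dict[str, Any]], Dict[str, Any]]

def sequence(*modules: Module) -> Module:
    """Compose modules into a reusable sequence that can be re-run verbatim."""
    def run(context: Dict[str, Any]) -> Dict[str, Any]:
        for step in modules:
            context = step(context)
        return context
    return run

def load_dem(ctx):    ctx["dem"] = "raw DEM grid"; return ctx
def triangulate(ctx): ctx["mesh"] = f"TIN from {ctx['dem']}"; return ctx
def run_model(ctx):   ctx["runoff"] = f"simulation on {ctx['mesh']}"; return ctx

rainfall_runoff = sequence(load_dem, triangulate, run_model)
print(rainfall_runoff({}))   # repeatable: same stored sequence, same steps
```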
Procedural Modeling for Rapid-Prototyping of Multiple Building Phases
NASA Astrophysics Data System (ADS)
Saldana, M.; Johanson, C.
2013-02-01
RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments that are linked to the GIS data using Keyhole Markup Language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Fluid design studies of integrated modular engine system
NASA Technical Reports Server (NTRS)
Frankenfield, Bruce; Carek, Jerry
1993-01-01
A study was performed to develop a fluid system design and show the feasibility of constructing an integrated modular engine (IME) configuration, using an expander cycle engine. The primary design goal of the IME configuration was to improve the propulsion system reliability. The IME fluid system was designed as a single fault tolerant system, while minimizing the required fluid components. This study addresses the design of the high pressure manifolds, turbopumps and thrust chambers for the IME configuration. A physical layout drawing was made, which located each of the fluid system components, manifolds and thrust chambers. Finally, a comparison was made between the fluid system designs of an IME system and a non-network (clustered) engine system.
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows
NASA Technical Reports Server (NTRS)
McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush
2004-01-01
With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what the scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
Tapie, L; Lebon, N; Mawussi, B; Fron Chabouis, H; Duret, F; Attal, J-P
2015-01-01
As digital technology infiltrates every area of daily life, including the field of medicine, it is increasingly being introduced into dental practice. Apart from chairside practice, computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental solutions can be considered a chain of digital devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use the technology often do not have the time or knowledge to understand it. A basic knowledge of the CAD/CAM digital workflow for dental restorations can help dentists to grasp the technology and purchase a CAD/CAM system that meets the needs of their office. This article provides a computer-science and mechanical-engineering approach to the CAD/CAM digital workflow to help dentists understand the technology.
Gene Composer: database software for protein construct design, codon engineering, and gene synthesis
Lorimer, Don; Raymond, Amy; Walchli, John; Mixon, Mark; Barrow, Adrienne; Wallace, Ellen; Grice, Rena; Burgin, Alex; Stewart, Lance
2009-01-01
Background: To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. Results: An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. Conclusion: We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene assembly procedure with mis-match specific endonuclease error correction in combination with PIPE cloning. In a sister manuscript we present data on how Gene Composer designed genes and protein constructs can result in improved protein production for structural studies. PMID:19383142
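The back-translation step lends itself to a compact illustration. The Python sketch below renders the idea, choosing synonymous codons by weighted sampling from a usage table and discarding codons below a minimal-usage threshold; the three-amino-acid table and the threshold value are illustrative, not Gene Composer's actual tables or algorithm.

```python
import random

CODON_USAGE = {  # amino acid -> {codon: relative usage} (illustrative values)
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "L": {"CTG": 0.47, "TTA": 0.14, "TTG": 0.13, "CTT": 0.12,
          "CTC": 0.10, "CTA": 0.04},
}

def back_translate(protein: str, min_usage: float = 0.10) -> str:
    """Pick a codon per residue, weighted by usage, above a minimal threshold."""
    gene = []
    for aa in protein:
        allowed = {c: f for c, f in CODON_USAGE[aa].items() if f >= min_usage}
        codons, weights = zip(*allowed.items())
        gene.append(random.choices(codons, weights=weights)[0])
    return "".join(gene)

print(back_translate("MKL"))  # e.g. 'ATGAAACTG'
```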
de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea
2018-01-01
Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during a clinical diagnostic process, thus improving the decision-making process. Our objective was to implement a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B in which institution-specific medical decision-making processes can be adapted without modifying the workflow's business logic. Separating business logic from medical decision-making results in more easily reusable electronic clinical workflows.
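The design principle, business logic that is ignorant of medical knowledge, can be sketched in a few lines. The Python below is a minimal illustration of the separation, not the Activiti/ARDENSUITE implementation; the rule name and interfaces are hypothetical.

```python
# The workflow owns only the ordering of steps; every medical decision is
# delegated to a swappable rule engine, so institution-specific rules can
# change without touching the workflow. Names below are hypothetical.
class RuleEngine:
    def evaluate(self, rule: str, data: dict) -> bool:
        raise NotImplementedError

class LocalHepBRules(RuleEngine):
    def evaluate(self, rule, data):
        if rule == "newborn_needs_immunoprophylaxis":
            return data["mother_hbsag_positive"]
        raise KeyError(rule)

def hep_b_workflow(patient: dict, rules: RuleEngine) -> str:
    # Business logic only: sequencing, not medical knowledge.
    if rules.evaluate("newborn_needs_immunoprophylaxis", patient):
        return "administer immunoprophylaxis per local protocol"
    return "routine vaccination schedule"

print(hep_b_workflow({"mother_hbsag_positive": True}, LocalHepBRules()))
```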
Rational Modular RNA Engineering Based on In Vivo Profiling of Structural Accessibility.
Leistra, Abigail N; Amador, Paul; Buvanendiran, Aishwarya; Moon-Walker, Alex; Contreras, Lydia M
2017-12-15
Bacterial small RNAs (sRNAs) have been established as powerful parts for controlling gene expression. However, development and application of engineered sRNAs has primarily focused on regulating novel synthetic targets. In this work, we demonstrate a rational modular RNA engineering approach that uses in vivo structural accessibility measurements to tune the regulatory activity of a multisubstrate sRNA for differential control of its native target network. Employing the CsrB global sRNA regulator as a model system, we use published in vivo structural accessibility data to infer the contribution of its local structures (substructures) to function and select a subset for engineering. We then modularly recombine the selected substructures, differentially representing those of presumed high or low functional contribution, to build a library of 21 CsrB variants. Using fluorescent translational reporter assays, we demonstrate that the CsrB variants achieve a 5-fold gradient of control of well-characterized Csr network targets. Interestingly, results suggest that less conserved local structures within long, multisubstrate sRNAs may represent better targets for rational engineering than their well-conserved counterparts. Lastly, mapping the impact of sRNA variants on a signature Csr network phenotype indicates the potential of this approach for tuning the activity of global sRNA regulators in the context of metabolic engineering applications.
NASA Technical Reports Server (NTRS)
Obrien, Charles J.
1993-01-01
Existing NASA research contracts are supporting development of advanced reinforced polymer and metal matrix composites for use in liquid rocket engines of the future. Advanced rocket propulsion concepts, such as modular platelet engines, dual-fuel dual-expander engines, and variable mixture ratio engines, require advanced materials and structures to reduce overall vehicle weight as well as address specific propulsion system problems related to elevated operating temperatures, new engine components, and unique operating processes. High performance propulsion systems with improved manufacturability and maintainability are needed for single stage to orbit vehicles and other high performance mission applications. One way to satisfy these needs is to develop a small engine which can be clustered in modules to provide required levels of total thrust. This approach should reduce development schedule and cost requirements by lowering hardware lead times and permitting the use of existing test facilities. Modular engines should also reduce operational costs associated with maintenance and parts inventories.
The Keller Plan: A Successful Experiment in Engineering Education.
ERIC Educational Resources Information Center
Koen, Billy; And Others
1985-01-01
Discusses the Keller Plan or personalized system of instruction (PSI), a mastery-oriented, self-paced, modular teaching strategy using student/peer proctors. Success for PSI in chemical engineering, operations research, electrical engineering, and nuclear engineering courses is explained. (DH)
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services based architecture: All of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.
Hufnagel, P.; Glandorf, J.; Körting, G.; Jabs, W.; Schweiger-Hufnagel, U.; Hahner, S.; Lubeck, M.; Suckau, D.
2007-01-01
Analysis of complex proteomes often results in long protein lists, but falls short in measuring the validity of identification and quantification results for larger numbers of proteins. Biological and technical replicates are mandatory, as is the combination of MS data from various workflows (gels, 1D-LC, 2D-LC), instruments (TOF/TOF, trap, qTOF or FTMS), and search engines. We describe a database-driven study that combines two workflows, two mass spectrometers, and four search engines, with protein identification following a decoy database strategy. The sample was a tryptically digested lysate (10,000 cells) of a human colorectal cancer cell line. Data from two LC-MALDI-TOF/TOF runs and a 2D-LC-ESI-trap run using capillary and nano-LC columns were submitted to the proteomics software platform ProteinScape. The combined MALDI data and the ESI data were searched using Mascot (Matrix Science), Phenyx (GeneBio), ProteinSolver (Bruker and Protagen), and Sequest (Thermo) against a decoy database generated from IPI-human in order to obtain one protein list across all workflows and search engines at a defined maximum false-positive rate of 5%. ProteinScape combined the data into one LC-MALDI and one LC-ESI dataset. The initial separate searches of the two combined datasets generated eight independent peptide lists. These were compiled into an integrated protein list using the ProteinExtractor algorithm. An initial evaluation of the generated data led to the identification of approximately 1200 proteins. Result integration on the peptide level allowed discrimination of protein isoforms that would not have been possible with a mere combination of protein lists.
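The decoy strategy reduces to a simple calculation: walk down the merged, score-sorted list of target and decoy matches and keep the deepest score cutoff at which the estimated false-positive rate (decoy hits over target hits) stays within the chosen bound. A minimal Python sketch, with illustrative scores:

```python
def threshold_at_fdr(target_scores, decoy_scores, max_fdr=0.05):
    """Lowest score cutoff whose estimated FDR (decoys/targets) <= max_fdr."""
    scored = [(s, False) for s in target_scores] + [(s, True) for s in decoy_scores]
    scored.sort(reverse=True)               # best score first
    targets = decoys = 0
    cutoff = None
    for score, is_decoy in scored:
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= max_fdr:
            cutoff = score                   # still within the FDR bound
    return cutoff

print(threshold_at_fdr([9.1, 8.4, 7.7, 6.9, 5.2], [6.5, 4.8]))  # -> 6.9
```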
Competency Based Modular Experiments in Polymer Science and Technology.
ERIC Educational Resources Information Center
Pearce, Eli M; And Others
1980-01-01
Describes a competency-based, modular laboratory course emphasizing the synthesis and characterization of polymers and directed toward senior undergraduate and/or first-year graduate students in science and engineering. One module, free-radical polymerization kinetics by dilatometry, is included as a sample. (CS)
Modular and Spatially Explicit: A Novel Approach to System Dynamics
The Open Modeling Environment (OME) is an open-source System Dynamics (SD) simulation engine which has been created as a joint project between Oregon State University and the US Environmental Protection Agency. It is designed around a modular implementation, and provides a standa...
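At its core, a System Dynamics engine like the one described repeatedly updates stocks from their flows. The Python sketch below shows that loop for a single-stock toy model; it illustrates the general SD mechanism, not the OME's actual integration scheme.

```python
# Toy reservoir model: one stock, a constant inflow, and an outflow
# proportional to the stock, stepped forward by Euler integration.
def simulate(stock, inflow, outflow_rate, dt=0.25, t_end=10.0):
    t, history = 0.0, [(0.0, stock)]
    while t < t_end:
        net_flow = inflow - outflow_rate * stock   # flows from current state
        stock += net_flow * dt                     # stock integrates its flows
        t += dt
        history.append((round(t, 2), round(stock, 3)))
    return history

for t, s in simulate(stock=100.0, inflow=5.0, outflow_rate=0.1)[::8]:
    print(t, s)   # relaxes toward the equilibrium inflow/outflow_rate = 50
```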
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56%. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes.
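The optimization itself can be illustrated compactly: fix the total biocatalyst load, then search the enzyme ratios for the maximum pathway flux predicted by a kinetic model. The Python sketch below uses a crude rate-limiting-step approximation and made-up kinetic constants; the published work used full kinetic models of the five-step conversion.

```python
import itertools

KCAT = (12.0, 3.0, 7.0)     # s^-1, enzymes E1..E3 (illustrative)
KM   = (0.5, 0.2, 0.8)      # mM (illustrative)

def steady_flux(e, s0=1.0):
    """Approximate pathway flux by its slowest Michaelis-Menten step at s0."""
    return min(kcat * ei * s0 / (km + s0) for kcat, km, ei in zip(KCAT, KM, e))

TOTAL = 1.0  # fixed total biocatalyst load (arbitrary units)
grid = [i / 20 for i in range(1, 20)]
best = max(((a, b, TOTAL - a - b)
            for a, b in itertools.product(grid, repeat=2) if a + b < TOTAL),
           key=steady_flux)
print("optimal E1:E2:E3 ~", tuple(round(x, 2) for x in best),
      "flux =", round(steady_flux(best), 3))
```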
Engineering modular and orthogonal genetic logic gates for robust digital-like synthetic biology.
Wang, Baojun; Kitney, Richard I; Joly, Nicolas; Buck, Martin
2011-10-18
Modular and orthogonal genetic logic gates are essential for building robust biologically based digital devices to customize cell signalling in synthetic biology. Here we constructed an orthogonal AND gate in Escherichia coli using a novel hetero-regulation module from Pseudomonas syringae. The device comprises two co-activating genes hrpR and hrpS controlled by separate promoter inputs, and a σ54-dependent hrpL promoter driving the output. The hrpL promoter is activated only when both genes are expressed, generating digital-like AND integration behaviour. The AND gate is demonstrated to be modular by applying new regulated promoters to the inputs, and connecting the output to a NOT gate module to produce a combinatorial NAND gate. The circuits were assembled using a parts-based engineering approach of quantitative characterization, modelling, followed by construction and testing. The results show that new genetic logic devices can be engineered predictably from novel native orthogonal biological control elements using quantitatively in-context characterized parts.
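The digital-like AND behaviour can be captured with a toy transfer-function model: the output promoter activity is the product of two Hill activation terms, one per co-activator, so appreciable output requires both inputs. All parameter values below are illustrative, not fitted to the published device.

```python
def hill(x, k=0.5, n=2.0):
    """Hill activation: ~0 for x << k, ~1 for x >> k."""
    return x**n / (k**n + x**n)

def and_gate_output(input1, input2, v_max=1.0):
    # hrpL-style output requires both co-activators to be expressed
    return v_max * hill(input1) * hill(input2)

for i1 in (0.0, 1.0):
    for i2 in (0.0, 1.0):
        print(f"inputs ({i1:.0f},{i2:.0f}) -> output {and_gate_output(i1, i2):.2f}")
# only the (1,1) case yields substantial output: digital-like AND behaviour
```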
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
An integrated knowledge system for wind tunnel testing - Project Engineers' Intelligent Assistant
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Shi, George Z.; Hoyt, W. A.; Steinle, Frank W., Jr.
1993-01-01
The Project Engineers' Intelligent Assistant (PEIA) is an integrated knowledge system developed using artificial intelligence technology, including hypertext, expert systems, and dynamic user interfaces. This system integrates documents, engineering codes, databases, and knowledge from domain experts into an enriched hypermedia environment and was designed to assist project engineers in planning and conducting wind tunnel tests. PEIA is a modular system which consists of an intelligent user-interface, seven modules and an integrated tool facility. Hypermedia technology is discussed and the seven PEIA modules are described. System maintenance and updating is very easy due to the modular structure and the integrated tool facility provides user access to commercial software shells for documentation, reporting, or database updating. PEIA is expected to provide project engineers with technical information, increase efficiency and productivity, and provide a realistic tool for personnel training.
Joint Common Architecture Demonstration (JCA Demo) Final Report
2016-07-28
Technical Report RDMR-AD-16-01: Joint Common Architecture Demonstration (JCA Demo) Final Report, by Scott A. Wigginton. Recoverable fragments of the front matter and table of contents describe the Open Systems Architecture (OSA) approach for implementing open systems, formerly known as the Modular Open Systems Approach (MOSA), as a business and technical strategy; contents include Integrated Modular Avionics and Model-Based Engineering.
Designing ECM-mimetic Materials Using Protein Engineering
Cai, Lei; Heilshorn, Sarah C.
2014-01-01
The natural extracellular matrix (ECM), with its multitude of evolved cell-instructive and cell-responsive properties, provides inspiration and guidelines for the design of engineered biomaterials. One strategy to create ECM-mimetic materials is the modular design of protein-based engineered ECM (eECM) scaffolds. This modular design strategy involves combining multiple protein domains with different functionalities into a single, modular polymer sequence, resulting in a multifunctional matrix with independent tunability of the individual domain functions. These eECMs often enable decoupled control over multiple material properties for fundamental studies of cell-matrix interactions. In addition, since the eECMs are frequently composed entirely of bioresorbable amino acids, these matrices have immense clinical potential for a variety of regenerative medicine applications. This brief review demonstrates how fundamental knowledge gained from structure-function studies of native proteins can be exploited in the design of novel protein-engineered biomaterials. While the field of protein-engineered biomaterials has existed for over 20 years, the community is only now beginning to fully explore the diversity of functional peptide modules that can be incorporated into these materials. We have chosen to highlight recent examples that either (1) demonstrate exemplary use as matrices with cell-instructive and cell-responsive properties or (2) demonstrate outstanding creativity in terms of novel molecular-level design and macro-level functionality. PMID:24365704
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Richard A.; Brown, Joseph M.; Colby, Sean M.
ATLAS (Automatic Tool for Local Assembly Structures) is a comprehensive multiomics data analysis pipeline that is massively parallel and scalable. ATLAS contains a modular analysis pipeline for assembly, annotation, quantification, and genome binning of metagenomics and metatranscriptomics data, and a framework for reference metaproteomic database construction. ATLAS transforms raw sequence data into functional and taxonomic data at the microbial population level and provides genome-centric resolution through genome binning. ATLAS provides robust taxonomy based on majority voting of protein-coding open reading frames rolled up at the contig level using modified lowest common ancestor (LCA) analysis. ATLAS is user-friendly, easy to install through Bioconda, maintained as open source on GitHub, and implemented in Snakemake for modular, customizable workflows.
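The contig-level roll-up can be sketched as majority voting down the taxonomic ranks: at each rank, keep the taxon supported by a majority of the contig's ORFs and descend, stopping when no majority exists. The Python below is an illustrative rendering, not ATLAS's exact algorithm; lineages and the tie-breaking behaviour are assumptions.

```python
from collections import Counter

def contig_taxonomy(orf_lineages, majority=0.5):
    """Deepest lineage prefix supported by a majority of ORF assignments."""
    if not orf_lineages:
        return ()
    consensus = []
    for level in range(max(len(l) for l in orf_lineages)):
        votes = Counter(l[level] for l in orf_lineages if len(l) > level)
        taxon, count = votes.most_common(1)[0]
        if count / len(orf_lineages) <= majority:
            break                        # no majority at this rank: stop (LCA-style)
        consensus.append(taxon)
        orf_lineages = [l for l in orf_lineages
                        if len(l) > level and l[level] == taxon]
    return tuple(consensus)

orfs = [("Bacteria", "Proteobacteria", "Escherichia"),
        ("Bacteria", "Proteobacteria", "Salmonella"),
        ("Bacteria", "Firmicutes")]
print(contig_taxonomy(orfs))   # ('Bacteria', 'Proteobacteria')
```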
Optimizing high performance computing workflow for protein functional annotation.
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-09-10
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
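The per-protein classification step amounts to accepting the best-scoring cluster hit only when it is both strong and unambiguous. The sketch below illustrates that decision rule in Python; the hit format, score cutoff, and ambiguity margin are assumptions for illustration, not the workflow's published parameters.

```python
def assign_to_cog(hits, min_bitscore=60.0, min_margin=1.2):
    """hits: list of (cog_id, bitscore) from a PSI-BLAST-like search."""
    if not hits:
        return None
    ranked = sorted(hits, key=lambda h: h[1], reverse=True)
    best_cog, best_score = ranked[0]
    if best_score < min_bitscore:
        return None                      # too weak: leave unannotated
    if len(ranked) > 1 and best_score < min_margin * ranked[1][1]:
        return None                      # ambiguous between two groups
    return best_cog

print(assign_to_cog([("COG0443", 182.0), ("COG0484", 95.5)]))  # COG0443
```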
Engineering Protein Hydrogels Using SpyCatcher-SpyTag Chemistry.
Gao, Xiaoye; Fang, Jie; Xue, Bin; Fu, Linglan; Li, Hongbin
2016-09-12
Constructing hydrogels from engineered proteins has attracted significant attention within the material sciences, owing to their myriad potential applications in biomedical engineering. Developing efficient methods to cross-link tailored protein building blocks into hydrogels with desirable mechanical, physical, and functional properties is of paramount importance. By making use of the recently developed SpyCatcher-SpyTag chemistry, we successfully engineered protein hydrogels on the basis of engineered tandem modular elastomeric proteins. Our resultant protein hydrogels are soft but stable, and show excellent biocompatibility. As the first step, we tested the use of these hydrogels as a drug carrier, as well as in encapsulating human lung fibroblast cells. Our results demonstrate the robustness of the SpyCatcher-SpyTag chemistry, even when the SpyTag (or SpyCatcher) is flanked by folded globular domains. These results demonstrate that SpyCatcher-SpyTag chemistry can be used to engineer protein hydrogels from tandem modular elastomeric proteins that can find applications in tissue engineering, in fundamental mechano-biological studies, and as a controlled drug release vehicle.
Recent Technology Advances in Distributed Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis
2017-01-01
This presentation provides an overview of the work performed at NASA Glenn Research Center in distributed engine control technology. This is control system hardware technology that overcomes engine system constraints by modularizing control hardware and integrating the components over communication networks.
Reliability studies of Integrated Modular Engine system designs
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1993-01-01
A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.
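The binomial approximation mentioned here has a compact form: if any k of n identical, independent modules must survive, system reliability is the binomial tail sum R = sum_{i=k..n} C(n,i) r^i (1-r)^(n-i). The Python sketch below contrasts a non-redundant cluster (all modules required) with a fault-tolerant networked configuration; the component reliability value is illustrative.

```python
from math import comb

def k_of_n_reliability(k: int, n: int, r: float) -> float:
    """P(at least k of n modules survive), each with reliability r."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r_pump = 0.98  # illustrative single pump-set reliability
print("non-networked (all 4 of 4 pump sets):", round(r_pump**4, 4))
print("networked IME (any 3 of 4 pump sets):",
      round(k_of_n_reliability(3, 4, r_pump), 4))
# redundancy raises system reliability from ~0.922 to ~0.998
```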
NASA Astrophysics Data System (ADS)
Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to running data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and image storage and sharing. In this model, all the software (workflow systems and execution engines) required to run scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as the backend of VREs or science gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing engineering time and computational cost.
Predicted performance of an integrated modular engine system
NASA Technical Reports Server (NTRS)
Binder, Michael; Felder, James L.
1993-01-01
Space vehicle propulsion systems are traditionally composed of a cluster of discrete engines, each with its own set of turbopumps, valves, and a thrust chamber. The Integrated Modular Engine (IME) concept proposes a vehicle propulsion system comprised of multiple turbopumps, valves, and thrust chambers which are all interconnected. The IME concept has potential advantages in fault tolerance, weight, and operational efficiency compared with the traditional clustered engine configuration. The purpose of this study is to examine the steady-state performance of an IME system with various components removed to simulate fault conditions. An IME configuration for a hydrogen/oxygen expander cycle propulsion system with four sets of turbopumps and eight thrust chambers has been modeled using the Rocket Engine Transient Simulator (ROCETS) program. The nominal steady-state performance is simulated, as well as turbopump, thrust chamber, and duct failures. The impact of component failures on system performance is discussed in the context of the system's fault-tolerant capabilities.
Highlights of X-Stack ExM Deliverable: MosaStore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ripeanu, Matei
2016-07-20
This brief report highlights the experience gained with MosaStore, an exploratory part of the X-Stack project “ExM: System support for extreme-scale, many-task applications”. The ExM project proposed to use concurrent workflows supported by the Swift language and runtime as an innovative programming model to exploit parallelism in exascale computers. MosaStore aims to support this endeavor by improving storage support for workflow-based applications, more precisely by exploring the gains that can be obtained from co-designing the storage system and the workflow runtime engine. MosaStore has been developed primarily at the University of British Columbia.
Disseminating Metaproteomic Informatics Capabilities and Knowledge Using the Galaxy-P Framework.
Blank, Clemens; Easterly, Caleb; Gruening, Bjoern; Johnson, James; Kolmeder, Carolin A; Kumar, Praveen; May, Damon; Mehta, Subina; Mesuere, Bart; Brown, Zachary; Elias, Joshua E; Hervey, W Judson; McGowan, Thomas; Muth, Thilo; Nunn, Brook; Rudney, Joel; Tanca, Alessandro; Griffin, Timothy J; Jagtap, Pratik D
2018-01-31
The impact of microbial communities, also known as the microbiome, on human health and the environment is receiving increased attention. Studying translated gene products (proteins) and comparing metaproteomic profiles may elucidate how microbiomes respond to specific environmental stimuli, and interact with host organisms. Characterizing proteins expressed by a complex microbiome and interpreting their functional signature requires sophisticated informatics tools and workflows tailored to metaproteomics. Additionally, there is a need to disseminate these informatics resources to researchers undertaking metaproteomic studies, who could use them to make new and important discoveries in microbiome research. The Galaxy for proteomics platform (Galaxy-P) offers an open source, web-based bioinformatics platform for disseminating metaproteomics software and workflows. Within this platform, we have developed easily-accessible and documented metaproteomic software tools and workflows aimed at training researchers in their operation and disseminating the tools for more widespread use. The modular workflows encompass the core requirements of metaproteomic informatics: (a) database generation; (b) peptide spectral matching; (c) taxonomic analysis and (d) functional analysis. Much of the software available via the Galaxy-P platform was selected, packaged and deployed through an online metaproteomics "Contribution Fest" undertaken by a unique consortium of expert software developers and users from the metaproteomics research community, who have co-authored this manuscript. These resources are documented on GitHub and freely available through the Galaxy Toolshed, as well as a publicly accessible metaproteomics gateway Galaxy instance. These documented workflows are well suited for the training of novice metaproteomics researchers, through online resources such as the Galaxy Training Network, as well as hands-on training workshops. Here, we describe the metaproteomics tools available within these Galaxy-based resources, as well as the process by which they were selected and implemented in our community-based work. We hope this description will increase access to and utilization of metaproteomics tools, as well as offer a framework for continued community-based development and dissemination of cutting edge metaproteomics software.
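The four core stages named above lend themselves to a simple composable pipeline. The sketch below is purely schematic: every function name, format, and value is a hypothetical placeholder for the actual Galaxy-P tools.

```python
# Schematic chaining of the four metaproteomic stages. All names and
# values are placeholders, not the Galaxy-P tool interfaces.

def generate_database(reads):                     # (a) database generation
    return ["PEPTIDEK", "SEQUENCER"]              # candidate peptides

def match_spectra(spectra, database):             # (b) peptide-spectral matching
    return [(s, pep) for s, pep in zip(spectra, database)]

def taxonomic_analysis(matches):                  # (c) peptide -> taxon
    return {pep: "Streptococcus (placeholder)" for _, pep in matches}

def functional_analysis(matches):                 # (d) peptide -> function
    return {pep: "GO:0008152 (placeholder)" for _, pep in matches}

db = generate_database("community_reads.fasta")
matches = match_spectra(["spectrum_1", "spectrum_2"], db)
print(taxonomic_analysis(matches))
print(functional_analysis(matches))
```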
NASA Astrophysics Data System (ADS)
Vedeneev, V. V.; Kolotnikov, M. E.; Mossakovskii, P. A.; Kostyreva, L. A.; Abdukhakimov, F. A.; Makarov, P. V.; Pyhalov, A. A.; Dudaev, M. A.
2018-01-01
In this paper we present a comprehensive numerical workflow for the analysis of blade flutter and high-amplitude resonant oscillations, of casing impenetrability in the event of blade loss, and of the rotor's reaction to blade detachment and the subsequent imbalance, with an assessment of the possibility of safe flight in the auto-rotation regime. All the methods used are carefully verified by numerical convergence studies and correlations with experiments. The use of the workflow developed significantly improves the efficiency of the design process for modern jet engine compressors. It ensures a significant reduction in the time and cost of compressor design at the required level of strength and durability.
Space station MSFC-DPD-235/DR no. CM-03 specification, modular space station project, Part 1 CEI
NASA Technical Reports Server (NTRS)
1971-01-01
Contract engineering item specifications for the modular space station are presented. These specifications resulted from the development and allocation of requirements, which are concise statements of performance or constraints on performance. The specifications contain requirements for functional performance and for the verification of design solutions.
A Modular Approach for Teaching Partial Discharge Phenomenon through Experiment
ERIC Educational Resources Information Center
Chatterjee, B.; Dey, D.; Chakravorti, S.
2011-01-01
Partial discharge (PD) monitoring is an effective predictive maintenance tool for electrical power equipment. As a result, an understanding of the theory related to PD and the associated measurement techniques is now necessary knowledge for power engineers in their professional life. This paper presents a modular course on PD phenomenon in which…
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows (Invited)
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-12-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues’ expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions has come together to improve community collaboration in science analysis by developing a customizable “software appliance” to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish “talkoot” (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a “science story” in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using “service casts” and “interest casts” (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH’s Mining Workflow Composer and the open-source Active BPEL engine, and JPL’s SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicating ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the “sociological” problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Cornwell, MacIntosh; Vangala, Mahesh; Taing, Len; Herbert, Zachary; Köster, Johannes; Li, Bo; Sun, Hanfei; Li, Taiwen; Zhang, Jian; Qiu, Xintao; Pun, Matthew; Jeselsohn, Rinath; Brown, Myles; Liu, X Shirley; Long, Henry W
2018-04-12
RNA sequencing has become a ubiquitous technology used throughout life sciences as an effective method of measuring RNA abundance quantitatively in tissues and cells. The increase in use of RNA-seq technology has led to the continuous development of new tools for every step of analysis, from alignment to downstream pathway analysis. However, effectively using these analysis tools in a scalable and reproducible way can be challenging, especially for non-experts. Using the workflow management system Snakemake, we have developed a user-friendly, fast, efficient, and comprehensive pipeline for RNA-seq analysis. VIPER (Visualization Pipeline for RNA-seq analysis) is an analysis workflow that combines some of the most popular tools to take RNA-seq analysis from raw sequencing data, through alignment and quality control, into downstream differential expression and pathway analysis. VIPER has been created in a modular fashion to allow for the rapid incorporation of new tools to expand its capabilities. This capacity has already been exploited to include very recently developed tools that explore immune infiltrate and T-cell CDR (Complementarity-Determining Region) reconstruction abilities. The pipeline has been conveniently packaged such that minimal computational skills are required to download and install the dozens of software packages that VIPER uses. VIPER is a comprehensive solution that performs most standard RNA-seq analyses quickly and effectively, with a built-in capacity for customization and expansion.
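For readers unfamiliar with Snakemake, a minimal Snakefile in VIPER's spirit might look like the sketch below. Snakemake rules are written in a Python-based DSL; the rule names, file layout, and shell commands here are placeholders, not VIPER's actual rules.

```python
# Hypothetical minimal Snakefile: align each sample, then aggregate QC.
SAMPLES = ["tumor_1", "tumor_2"]

rule all:
    input: "qc/summary.txt"

rule align:
    input: "fastq/{sample}.fastq.gz"
    output: "aligned/{sample}.bam"
    shell: "run_aligner {input} > {output}"     # placeholder for STAR, etc.

rule qc_summary:
    input: expand("aligned/{sample}.bam", sample=SAMPLES)
    output: "qc/summary.txt"
    shell: "collect_qc {input} > {output}"      # placeholder QC aggregator
```

Snakemake resolves the wildcard dependencies automatically: requesting `qc/summary.txt` triggers one `align` job per sample.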
PCL-HA microscaffolds for in vitro modular bone tissue engineering.
Totaro, Alessandra; Salerno, Aurelio; Imparato, Giorgia; Domingo, Concepción; Urciuolo, Francesco; Netti, Paolo Antonio
2017-06-01
The evolution of microscaffolds and bone-bioactive surfaces is a pivotal point in modular bone tissue engineering. In this study, the design and fabrication of porous polycaprolactone (PCL) microscaffolds functionalized with hydroxyapatite (HA) nanoparticles by means of a bio-safe and versatile thermally-induced phase separation process is reported. The ability of the as-prepared nanocomposite microscaffolds to support the adhesion, growth and osteogenic differentiation of human mesenchymal stem cells (hMSCs) in standard and osteogenic media and using dynamic seeding/culture conditions was investigated. The obtained results demonstrated that the PCL-HA nanocomposite microparticles had an enhanced interaction with hMSCs and induced their osteogenic differentiation, even without the exogenous addition of osteogenic factors. In particular, calcium deposition, alizarin red assay, histological analysis, osteogenic gene expression and collagen I secretion were assessed. The results of these tests demonstrated the formation of bone microtissue precursors after 28 days of dynamic culture. These findings suggest that PCL-HA nanocomposite microparticles represent an excellent platform for in vitro modular bone tissue engineering.
Modular Engine Noise Component Prediction System (MCP) Program Users' Guide
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Herkes, William H.; Reed, David H.
2004-01-01
This is a user's manual for the Modular Engine Noise Component Prediction System (MCP). This computer code allows the user to generate turbofan engine noise estimates. The program is based on an empirical procedure that has evolved over many years at The Boeing Company. The data used to develop the procedure include both full-scale engine data and small-scale model data, and include testing done by Boeing, by the engine manufacturers, and by NASA. In order to generate a noise estimate, the user specifies the appropriate engine properties (including both geometry and performance parameters), the microphone locations, the atmospheric conditions, and certain data processing options. The version of the program described here allows the user to predict three components: inlet-radiated fan noise, aft-radiated fan noise, and jet noise. MCP predicts one-third octave band noise levels over the frequency range of 50 to 10,000 Hertz. It also calculates overall sound pressure levels and certain subjective noise metrics (e.g., perceived noise levels).
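For reference, the one-third octave band center frequencies spanning MCP's 50 Hz to 10 kHz range can be generated directly from the base-ten band definition; a small sketch (the rounding to the familiar nominal values is omitted):

```python
# One-third octave band centers from 50 Hz to 10 kHz, base-10 definition:
# f_c = 1000 * 10**(n/10); n = -13 .. 10 gives 24 bands.
centers = [1000 * 10 ** (n / 10) for n in range(-13, 11)]
print([round(f, 1) for f in centers])
# ~[50.1, 63.1, 79.4, 100.0, ..., 7943.3, 10000.0]
# i.e. the nominal 50, 63, 80, 100, ... 10000 Hz bands.
```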
A novel spectral library workflow to enhance protein identifications.
Li, Haomin; Zong, Nobel C; Liang, Xiangbo; Kim, Allen K; Choi, Jeong Ho; Deng, Ning; Zelaya, Ivette; Lam, Maggie; Duan, Huilong; Ping, Peipei
2013-04-09
The innovations in mass spectrometry-based investigations in proteome biology enable systematic characterization of molecular details in pathophysiological phenotypes. However, the process of delineating large-scale raw proteomic datasets into a biological context requires high-throughput data acquisition and processing. A spectral library search engine makes use of previously annotated experimental spectra as references for subsequent spectral analyses. This workflow delivers many advantages, including elevated analytical efficiency and specificity as well as reduced demands on computational capacity. In this study, we created a spectral matching engine to address challenges commonly associated with a library search workflow. In particular, an improved sliding dot product algorithm that is robust to systematic drifts in mass measurement is introduced. Furthermore, a noise management protocol distinguishes spectral correlation attributable to noise from that attributable to peptide fragments. This improves the separation between true spectral matches and false matches, thereby suppressing the possibility of propagating inaccurate peptide annotations from library spectra to query spectra. Moreover, preservation of the original spectra also accommodates user contributions to further enhance the quality of the library. Collectively, this search engine supports reproducible data analyses using curated references, thereby broadening the accessibility of proteomics resources to biomedical investigators.
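The core idea of a drift-tolerant sliding dot product fits in a few lines. The sketch below illustrates the general technique on binned spectra; it is not the paper's exact algorithm, and `np.roll` wraps at the edges, which a production implementation would handle explicitly.

```python
# Drift-tolerant similarity between binned spectra: take the best
# normalized dot product over small bin shifts.
import numpy as np

def sliding_dot(query, library, max_shift=3):
    q = query / (np.linalg.norm(query) or 1.0)
    ref = library / (np.linalg.norm(library) or 1.0)
    return max(float(np.dot(q, np.roll(ref, s)))
               for s in range(-max_shift, max_shift + 1))

query = np.array([0, 5, 1, 0, 9, 0, 2], dtype=float)
drifted = np.roll(query, -1)            # same peaks, shifted one bin
print(sliding_dot(query, drifted))      # ~1.0 despite the systematic drift
print(float(np.dot(query, drifted) / np.dot(query, query)))  # plain dot: much lower
```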
Specification and design of a Therapy Imaging and Model Management System (TIMMS)
NASA Astrophysics Data System (ADS)
Lemke, Heinz U.; Berliner, Leonard
2007-03-01
Appropriate use of Information and Communication Technology (ICT) and Mechatronic (MT) systems is considered by many experts as a significant contribution to improving workflow and quality of care in the Operating Room (OR). This will require a suitable IT infrastructure as well as communication and interface standards, such as DICOM and suitable extensions, to allow data interchange between surgical system components in the OR. A conceptual design of such an infrastructure, i.e. a Therapy Imaging and Model Management System (TIMMS), is introduced in this paper. A TIMMS should support the essential functions that enable and advance image-guided and, in particular, patient-model-guided therapy. Within this concept, the image-centric world view of classical PACS technology is complemented by an IT model-centric world view. Such a view is founded in the special modelling needs of an increasing number of modern surgical interventions, as compared to the imaging-intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed. A proper design of a TIMMS, taking into account modern software engineering principles, such as service-oriented architecture, will clarify the proper placement of interfaces and relevant standards for a Surgical Assist System (SAS) in general and for its components specifically. Such a system needs to be designed to provide a highly modular structure. Modules may be defined at different granularity levels. A first list of components (e.g. high- and low-level modules) comprising engines and repositories of an SAS, which should be integrated by a TIMMS, is introduced in this paper.
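The modular, service-oriented structure described above can be made concrete with explicit interfaces; the following is a minimal sketch, with class and method names that are hypothetical rather than drawn from the TIMMS specification.

```python
# Minimal sketch of swappable engine/repository modules behind one
# integrating facade. Names are illustrative, not from TIMMS.
from abc import ABC, abstractmethod

class Engine(ABC):
    """A swappable processing module, e.g. a segmentation or modelling engine."""
    @abstractmethod
    def process(self, patient_model: dict) -> dict: ...

class Repository(ABC):
    """A storage module, e.g. for images or patient models."""
    @abstractmethod
    def store(self, key: str, item: dict) -> None: ...

class TIMMS:
    """Integrates registered engines and repositories behind one interface."""
    def __init__(self):
        self.engines, self.repos = [], []

    def register(self, module):
        (self.engines if isinstance(module, Engine) else self.repos).append(module)

    def run(self, patient_model: dict) -> dict:
        for engine in self.engines:          # pipeline of registered engines
            patient_model = engine.process(patient_model)
        for repo in self.repos:              # persist the updated patient model
            repo.store("case-001", patient_model)
        return patient_model
```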
Maserat, Elham; Seied Farajollah, Seiede Sedigheh; Safdari, Reza; Ghazisaeedi, Marjan; Aghdaei, Hamid Asadzadeh; Zali, Mohammad Reza
2015-01-01
Colorectal cancer is a major cause of morbidity and mortality throughout the world. Colorectal cancer screening is an optimal way to reduce morbidity and mortality, and a clinical decision support system (CDSS) plays an important role in predicting the success of screening processes. A CDSS is a computer-based information system that improves the delivery of preventive care services. The aim of this article was to detail the engineering of the information requirements and the workflow design of a CDSS for a colorectal cancer screening program. In the first stage, a screening minimum data set was determined; developed and developing countries were analyzed to identify this data set, and information deficiencies and gaps were then determined with a checklist. The second stage was a qualitative survey with a semi-structured interview as the study tool, covering the perspectives of 15 users and stakeholders on the workflow of the CDSS. Finally, the workflow of the DSS for the control program was designed from standard clinical practice guidelines and these perspectives. The screening minimum data set of the national colorectal cancer screening program was defined in five sections: colonoscopy, surgery, pathology, genetics, and pedigree data sets. Deficiencies and information gaps were analyzed, a standard screening work process was designed, and the workflow of the DSS and its data entry stage were determined. A CDSS facilitates complex decision making for screening and has a key role in designing optimal interactions between the colonoscopy, pathology, and laboratory departments. Workflow analysis is also useful for identifying data reconciliation strategies to address documentation gaps. Following the recommendations of a CDSS should improve the quality of colorectal cancer screening.
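A minimum data set like the one described maps naturally onto a simple schema with a completeness check; a sketch follows, in which the field names are illustrative rather than the published data set.

```python
# Illustrative screening minimum data set in five sections, with a
# completeness check that flags documentation gaps for reconciliation.
MINIMUM_DATA_SET = {
    "colonoscopy": ["date", "findings", "polyp_count"],
    "surgery":     ["procedure", "date"],
    "pathology":   ["histology", "stage"],
    "genetics":    ["mutation_tested", "result"],
    "pedigree":    ["first_degree_relatives_affected"],
}

def find_gaps(record: dict) -> dict:
    """Return the missing fields per section for one screening record."""
    return {
        section: [f for f in fields if not record.get(section, {}).get(f)]
        for section, fields in MINIMUM_DATA_SET.items()
    }

record = {"colonoscopy": {"date": "2015-03-01", "findings": "polyp"},
          "pathology": {"histology": "adenoma"}}
print(find_gaps(record))   # every unfilled field, grouped by section
```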
Clearing the skies over modular polyketide synthases.
Sherman, David H; Smith, Janet L
2006-09-19
Modular polyketide synthases (PKSs) are large multifunctional proteins that synthesize complex polyketide metabolites in microbial cells. A series of recent studies confirm the close protein structural relationship between catalytic domains in the type I mammalian fatty acid synthase (FAS) and the basic synthase unit of the modular PKS. They also establish a remarkable similarity in the overall organization of the type I FAS and the PKS module. This information provides important new conclusions about catalytic domain architecture, function, and molecular recognition that are essential for future efforts to engineer useful polyketide metabolites with valuable biological activities.
Provenance-Powered Automatic Workflow Generation and Composition
NASA Astrophysics Data System (ADS)
Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.
2015-12-01
In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is oftentimes contradictory to a domain scientist's daily routine of conducting research and exploration. We hope to resolve this dispute. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from recorded user behavior, supporting the adaptation and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance to further facilitate collaboration in the science community. We have also established a Petri-net-based verification instrument for provenance-based automatic workflow generation and recommendation.
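Reverse-engineering a workflow from provenance reduces, at its simplest, to chaining recorded tool invocations whose outputs feed later inputs, i.e. a topological sort of the provenance graph. A toy sketch (the event fields and tool names are hypothetical):

```python
# Toy reconstruction: order recorded tool runs so that each step's inputs
# are produced by earlier steps.
from graphlib import TopologicalSorter

events = [
    {"tool": "cross_compare", "inputs": ["sst_grid", "argo_grid"], "outputs": ["anomaly_map"]},
    {"tool": "regrid_amsre",  "inputs": ["amsre_raw"], "outputs": ["sst_grid"]},
    {"tool": "regrid_argo",   "inputs": ["argo_raw"],  "outputs": ["argo_grid"]},
]

produced_by = {out: e["tool"] for e in events for out in e["outputs"]}
deps = {e["tool"]: {produced_by[i] for i in e["inputs"] if i in produced_by}
        for e in events}

print(list(TopologicalSorter(deps).static_order()))
# e.g. ['regrid_amsre', 'regrid_argo', 'cross_compare']: a replayable workflow
```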
Parametric Workflow (BIM) for the Repair Construction of Traditional Historic Architecture in Taiwan
NASA Astrophysics Data System (ADS)
Ma, Y.-P.; Hsu, C. C.; Lin, M.-C.; Tsai, Z.-W.; Chen, J.-Y.
2015-08-01
In Taiwan, numerous existing traditional buildings are constructed with wooden, brick, or stone structures. This paper focuses on Taiwanese traditional historic architecture, targeting traditional wooden-structure buildings as the design proposition, and develops a BIM workflow for modeling complex wooden combination geometry, integrating it with traditional 2D documents, and visualizing repair construction assumptions within the 3D model representation. The goal of this article is to explore the current problems to overcome in wooden historic building conservation, and to introduce BIM technology for conserving, documenting, and managing historic buildings and for creating full engineering drawings and information that effectively support historic conservation. Although BIM is mostly oriented to current construction praxis, there have been some attempts to investigate its applicability in historic conservation projects. This article also illustrates the importance and advantages of using a BIM workflow in the repair construction process, compared with a generic workflow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. The goal of this project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
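Objectives such as minimum end-to-end delay reduce to longest-path computations on the task DAG; a small sketch with made-up task names and costs:

```python
# End-to-end delay of a workflow DAG = longest path from source to sink,
# given per-task execution costs (toy numbers, topological order).
from graphlib import TopologicalSorter

cost = {"acquire": 3, "filter": 2, "simulate": 7, "visualize": 1}
deps = {"filter": {"acquire"}, "simulate": {"acquire"},
        "visualize": {"filter", "simulate"}}

finish = {}
for task in TopologicalSorter(deps).static_order():
    start = max((finish[d] for d in deps.get(task, ())), default=0)
    finish[task] = start + cost[task]

print(max(finish.values()))   # 11: acquire -> simulate -> visualize
```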
GUEST EDITOR'S INTRODUCTION: Guest Editor's introduction
NASA Astrophysics Data System (ADS)
Chrysanthis, Panos K.
1996-12-01
This special issue focuses on current efforts to represent and support workflows that integrate information systems and human resources within a business or manufacturing enterprise. Workflows may also be viewed as an emerging computational paradigm for effective structuring of cooperative applications involving human users and access to diverse data types not necessarily maintained by traditional database management systems. A workflow is an automated organizational process (also called business process) which consists of a set of activities or tasks that need to be executed in a particular controlled order over a combination of heterogeneous database systems and legacy systems. Within workflows, tasks are performed cooperatively by either human or computational agents in accordance with their roles in the organizational hierarchy. The challenge in facilitating the implementation of workflows lies in developing efficient workflow management systems. A workflow management system (also called workflow server, workflow engine or workflow enactment system) provides the necessary interfaces for coordination and communication among human and computational agents to execute the tasks involved in a workflow and controls the execution orderings of tasks as well as the flow of data that these tasks manipulate. That is, the workflow management system is responsible for correctly and reliably supporting the specification, execution, and monitoring of workflows. The six papers selected (out of the twenty-seven submitted for this special issue of Distributed Systems Engineering) address different aspects of these three functional components of a workflow management system. In the first paper, `Correctness issues in workflow management', Kamath and Ramamritham discuss the important issue of correctness in workflow management that constitutes a prerequisite for the use of workflows in the automation of the critical organizational/business processes. In particular, this paper examines the issues of execution atomicity and failure atomicity, differentiating between correctness requirements of system failures and logical failures, and surveys techniques that can be used to ensure data consistency in workflow management systems. While the first paper is concerned with correctness assuming transactional workflows in which selective transactional properties are associated with individual tasks or the entire workflow, the second paper, `Scheduling workflows by enforcing intertask dependencies' by Attie et al, assumes that the tasks can be either transactions or other activities involving legacy systems. This second paper describes the modelling and specification of conditions involving events and dependencies among tasks within a workflow using temporal logic and finite state automata. It also presents a scheduling algorithm that enforces all stated dependencies by executing at any given time only those events that are allowed by all the dependency automata and in an order as specified by the dependencies. In any system with decentralized control, there is a need to effectively cope with the tension that exists between autonomy and consistency requirements. In `A three-level atomicity model for decentralized workflow management systems', Ben-Shaul and Heineman focus on the specific requirement of enforcing failure atomicity in decentralized, autonomous and interacting workflow management systems.
Their paper describes a model in which each workflow manager must be able to specify the sequence of tasks that comprise an atomic unit for the purposes of correctness, and the degrees of local and global atomicity for the purpose of cooperation with other workflow managers. The paper also discusses a realization of this model in which treaties and summits provide an agreement mechanism, while underlying transaction managers are responsible for maintaining failure atomicity. The fourth and fifth papers are experience papers describing a workflow management system and a large scale workflow application, respectively. Schill and Mittasch, in `Workflow management systems on top of OSF DCE and OMG CORBA', describe a decentralized workflow management system and discuss its implementation using two standardized middleware platforms, namely, OSF DCE and OMG CORBA. The system supports a new approach to workflow management, introducing several new concepts such as data type management for integrating various types of data and quality of service for various services provided by servers. A problem common to both database applications and workflows is the handling of missing and incomplete information. This is particularly pervasive in an `electronic market' with a huge number of retail outlets producing and exchanging volumes of data, the application discussed in `Information flow in the DAMA project beyond database managers: information flow managers'. Motivated by the need for a method that allows a task to proceed in a timely manner if not all data produced by other tasks are available by its deadline, Russell et al propose an architectural framework and a language that can be used to detect, approximate and, later on, to adjust missing data if necessary. The final paper, `The evolution towards flexible workflow systems' by Nutt, is complementary to the other papers and is a survey of issues and of work related to both workflow and computer supported collaborative work (CSCW) areas. In particular, the paper provides a model and a categorization of the dimensions which workflow management and CSCW systems share. Besides summarizing the recent advancements towards efficient workflow management, the papers in this special issue suggest areas open to investigation and it is our hope that they will also provide the stimulus for further research and development in the area of workflow management systems.
Introduction of Sustainability Concepts into Industrial Engineering Education: A Modular Approach
ERIC Educational Resources Information Center
Nazzal, Dima; Zabinski, Joseph; Hugar, Alexander; Reinhart, Debra; Karwowski, Waldemar; Madani, Kaveh
2015-01-01
Sustainability in operations, production, and consumption continues to gain relevance for engineers. This trend will accelerate as demand for goods and services grows, straining resources and requiring ingenuity to replace boundless supply in meeting the needs of a more crowded, more prosperous world. Industrial engineers are uniquely positioned…
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels: conceptual and concrete. At the conceptual level, workflows exist in the form of data and service types defined by ontologies. Workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in the Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflow for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119, transparent, translucent, and opaque, are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after a proper peer review of the models is conducted. Automated workflow composition based on ontologies and artificial intelligence technology has also been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
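Instantiating a conceptual workflow into a concrete one is essentially late binding of service endpoints to typed steps. A schematic sketch follows; the registry contents and type names are invented, and GeoBrain itself encodes workflows in BPEL rather than Python.

```python
# Schematic late binding: a conceptual workflow names data/service *types*;
# instantiation resolves each type to a concrete service endpoint.
CONCEPTUAL_DROUGHT_WORKFLOW = ["IngestPrecipType", "AnomalyType", "DroughtIndexType"]

SERVICE_REGISTRY = {   # invented endpoints standing in for a service catalog
    "IngestPrecipType": "http://example.org/wps/ingest_precip",
    "AnomalyType":      "http://example.org/wps/compute_anomaly",
    "DroughtIndexType": "http://example.org/wps/drought_index",
}

def instantiate(conceptual):
    try:
        return [SERVICE_REGISTRY[step] for step in conceptual]
    except KeyError as missing:
        raise ValueError(f"no concrete service for type {missing}") from None

for endpoint in instantiate(CONCEPTUAL_DROUGHT_WORKFLOW):
    print("invoke:", endpoint)   # a BPEL engine would execute these in order
```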
Tackling the x-ray cargo inspection challenge using machine learning
NASA Astrophysics Data System (ADS)
Jaccard, Nicolas; Rogers, Thomas W.; Morton, Edward J.; Griffin, Lewis D.
2016-05-01
The current infrastructure for non-intrusive inspection of cargo containers cannot accommodate exploding commerce volumes and increasingly stringent regulations. There is a pressing need to develop methods to automate parts of the inspection workflow, enabling expert operators to focus on a manageable number of high-risk images. To tackle this challenge, we developed a modular framework for automated X-ray cargo image inspection. Employing state-of-the-art machine learning approaches, including deep learning, we demonstrate high performance for empty container verification and specific threat detection. This work constitutes a significant step towards the partial automation of X-ray cargo image inspection.
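Empty-container verification is, at its core, binary image classification. A minimal sketch with synthetic data follows; the feature vectors are random stand-ins for image descriptors, not the deep-learning pipeline the paper describes.

```python
# Minimal stand-in for empty-container verification: a random forest on
# synthetic "image feature" vectors (not the paper's deep learning models).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
empty = rng.normal(0.0, 1.0, size=(200, 16))    # synthetic empty-container features
loaded = rng.normal(1.5, 1.0, size=(200, 16))   # synthetic loaded-container features
X = np.vstack([empty, loaded])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```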
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis program (TETRA) was applied to two blade-out test vehicles that had previously been instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are also described.
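The central-difference scheme at the heart of such a transient analysis is compact. A single-degree-of-freedom sketch follows, with toy parameters standing in for TETRA's multi-component modal model:

```python
# Central-difference time integration of one modal equation
#   m*x'' + c*x' + k*x = f(t)
# using x_{n+1} = 2*x_n - x_{n-1} + dt^2 * acc_n. Toy stand-in for TETRA.

m, c, k = 1.0, 0.05, 40.0                # toy modal mass, damping, stiffness
dt, steps = 0.01, 500                    # dt well below the stability limit
f = lambda t: 1.0 if t < 0.1 else 0.0    # short impulse, like a blade-out load

x_prev, x = 0.0, 0.0
for n in range(steps):
    t = n * dt
    acc = (f(t) - c * (x - x_prev) / dt - k * x) / m
    x_prev, x = x, 2 * x - x_prev + dt * dt * acc

print(f"displacement at t={steps * dt:.2f}s: {x:.4f}")
```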
Integrated Sensitivity Analysis Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
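One-at-a-time perturbation, the simplest form of what such tooling automates, fits in a few lines; the model and perturbation size below are illustrative.

```python
# One-at-a-time (OAT) sensitivity: perturb each input of a toy model by
# +1% and report the relative change in the output.
def beam_deflection(load, length, stiffness):
    return load * length ** 3 / (3.0 * stiffness)   # cantilever tip deflection

baseline = {"load": 1000.0, "length": 2.0, "stiffness": 2.0e6}
y0 = beam_deflection(**baseline)

for name, value in baseline.items():
    bumped = dict(baseline, **{name: value * 1.01})
    dy = (beam_deflection(**bumped) - y0) / y0
    print(f"{name:9s}: +1% input -> {dy:+.2%} output")
# length dominates (~+3%), as its cubic term suggests.
```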
Modular electron transfer circuits for synthetic biology
Agapakis, Christina M
2010-01-01
Electron transfer is central to a wide range of essential metabolic pathways, from photosynthesis to fermentation. The evolutionary diversity and conservation of proteins that transfer electrons makes these pathways a valuable platform for engineered metabolic circuits in synthetic biology. Rational engineering of electron transfer pathways containing hydrogenases has the potential to lead to industrial scale production of hydrogen as an alternative source of clean fuel and experimental assays for understanding the complex interactions of multiple electron transfer proteins in vivo. We designed and implemented a synthetic hydrogen metabolism circuit in Escherichia coli that creates an electron transfer pathway both orthogonal to and integrated within existing metabolism. The design of such modular electron transfer circuits allows for facile characterization of in vivo system parameters with applications toward further engineering for alternative energy production.
An Introduction to the Fundamentals of Chemistry for the Marine Engineer.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This document describes an introductory course in the fundamentals of chemistry for marine engineers. The course is modularized and audio-tutorial based, allowing the student to progress at his or her own rate while integrating laboratory and lecture materials. (SL)
Hernández Vera, Rodrigo; Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan
2016-01-01
Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive, and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often presents workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modular and affordable time-lapse imaging and incubation system (ATLIS). The ATLIS enables the transformation of simple inverted microscopes into live cell imaging systems using custom-designed 3D-printed parts, a smartphone, and off-the-shelf electronic components. We demonstrate that the ATLIS provides stable environmental conditions to support normal cell behavior during live imaging experiments in both traditional and evaporation-sensitive microfluidic cell culture systems. Thus, the system presented here has the potential to increase the accessibility of time-lapse microscopy of living cells for the wider research community.
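The control loop at the heart of such a system is simple. The sketch below uses a hypothetical `capture` callback; ATLIS itself drives a smartphone camera, which is not modeled here.

```python
# Minimal time-lapse driver: capture a frame every `interval_s` seconds.
# `capture` is a hypothetical callback supplied by the imaging hardware.
import time

def run_timelapse(capture, interval_s=300, frames=288):   # 288 x 5 min = 24 h
    for i in range(frames):
        t0 = time.monotonic()
        capture(f"frame_{i:04d}.jpg")
        # Sleep for the remainder of the interval, compensating for capture time.
        time.sleep(max(0.0, interval_s - (time.monotonic() - t0)))

if __name__ == "__main__":
    run_timelapse(lambda name: print("captured", name), interval_s=1, frames=3)
```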
NASA Astrophysics Data System (ADS)
L'Heureux, Zara E.
This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two-orders-of-magnitude difference seen here. For the first time, this thesis provides a mathematical analysis of scaling with a combination of both changing individual unit size and varying the total number of units produced. Different paths to meet a particular cumulative capacity are analyzed and show that total costs are path dependent and vary as a function of the unit size and number of units produced. The path dependence identified is fairly weak, however, and for all practical applications, the underlying scaling laws seem unaffected. This analysis continues to support the interest in pursuing designs built around small, modular infrastructure. Building on the observation that internal combustion engines are an inexpensive power-producing unit, the first optimization in this thesis focuses on quantifying the value of engine capacity committing to deliver power in the day-ahead electricity and reserve markets, specifically based on pricing from the New York Independent System Operator (NYISO). An optimization was written in Python to determine, based on engine cost, fuel cost, engine wear, engine lifetime, and electricity prices, when and how much of an engine's power should be committed to a particular energy market. The optimization aimed to maximize profit for the engine and generator (engine genset) system acting as a price-taker. The result is an annual profit on the order of $30 per kilowatt. Most of the value in the engine genset lies in its commitments to the spinning reserve market, where power is often committed but not always called on to deliver.
This analysis highlights the benefits of modularity in energy generation and provides one example where the system is so inexpensive and short-lived, that the optimization views the engine replacement cost as a consumable operating expense rather than a capital cost. Having the opportunity to incorporate incremental technological improvements in a system's infrastructure throughout its lifetime allows introduction of new technology with higher efficiencies and better designs. An alternative to traditionally large infrastructure that locks in a design and today's state-of-the-art technology for the next 50 - 70 years, is a system designed to incorporate new technology in a modular fashion. The modular engine genset system used for power generation is one example of how this works in practice. The largest single component of this thesis is modeling, designing, retrofitting, and testing a reciprocating piston engine used as a compressor. Motivated again by the low cost of an internal combustion engine, this work looks at how an engine (which is, in its conventional form, essentially a reciprocating compressor) can be cost-effectively retrofitted to perform as a small-scale gas compressor. In the laboratory, an engine compressor was built by retrofitting a one-cylinder, 79 cc engine. Various retrofitting techniques were incorporated into the system design, and the engine compressor performance was quantified in each iteration. Because the retrofitted engine is now a power consumer rather than a power-producing unit, the engine compressor is driven in the laboratory with an electric motor. Experimentally, compressed air engine exhaust (starting at elevated inlet pressures) surpassed 650 psia (about 45 bar), which makes this system very attractive for many applications in chemical engineering and refining industries. A model of the engine compressor system was written in Python and incorporates experimentally-derived parameters to quantify gas leakage, engine friction, and flow (including backflow) through valves. The model as a whole was calibrated and verified with experimental data and is used to explore engine retrofits beyond what was tested in the laboratory. Along with the experimental and modeling work, a techno-economic assessment is included to compare the engine compressor system with state-of-the-art, commercially-available compressors. Included in the financial analysis is a case study where an engine compressor system is modeled to achieve specific compression needs. The result of the assessment is that, indeed, the low engine cost, even with the necessary retrofits, provides a cost advantage over incumbent compression technologies. Lastly, this thesis provides an algorithm and case study for another application of small-scale units in energy infrastructure, specifically in energy storage. This study focuses on quantifying the value of small-scale, onsite energy storage in shaving peak power demands. This case study focuses on university-level power demands. The analysis finds that, because peak power is so costly, even small amounts of energy storage, when dispatched optimally, can provide significant cost reductions. This provides another example of the value of small-scale implementations, particularly in energy infrastructure. While the study focuses on flywheels and batteries as the energy storage medium, engine gensets could also be used to deliver power and shave peak power demands. 
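The compression figures quoted above can be sanity-checked with ideal-gas relations. The short worked sketch below treats single-stage adiabatic compression of air with toy inlet conditions; it is an order-of-magnitude check, not the engine-compressor model from the thesis.

```python
# Ideal-gas isentropic compression of air: outlet temperature and specific
# work for a given pressure ratio. Toy check, not the thesis model.
gamma, R = 1.4, 287.0            # air: heat-capacity ratio, gas constant [J/(kg K)]
cp = gamma * R / (gamma - 1.0)   # ~1004.5 J/(kg K)

T_in = 300.0                     # inlet temperature [K]
p_in, p_out = 3.0, 45.0          # elevated inlet pressure -> ~45 bar outlet [bar]

ratio = p_out / p_in
T_out = T_in * ratio ** ((gamma - 1.0) / gamma)
work = cp * (T_out - T_in)       # specific compression work [J/kg]

print(f"pressure ratio {ratio:.0f}: T_out = {T_out:.0f} K, work = {work / 1000:.0f} kJ/kg")
# Single-stage T_out is high (~650 K); real machines stage and intercool.
```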
The overarching goal of this thesis is to introduce small-scale, modular infrastructure, with a particular focus on the opportunity to retrofit and repurpose inexpensive, mass-manufactured internal combustion engines in new and unconventional applications. The modeling and experimental work presented in this dissertation show very compelling results for engines incorporated into both energy generation infrastructure and chemical engineering industries via compression technologies. The low engine cost provides an opportunity to add retrofits whilst remaining cost competitive with the incumbent technology. This work supports the claim that modular infrastructure, built on the indivisible unit of an internal combustion engine, can revolutionize many industries by providing a low-cost mechanism for rapid change and promoting small-scale designs.
Self-organized modularization in evolutionary algorithms.
Dauscher, Peter; Uthmann, Thomas
2005-01-01
The principle of modularization has proven to be extremely successful in the field of technical applications and particularly for Software Engineering purposes. The question to be answered within the present article is whether mechanisms can also be identified within the framework of Evolutionary Computation that cause a modularization of solutions. We will concentrate on processes, where modularization results only from the typical evolutionary operators, i.e. selection and variation by recombination and mutation (and not, e.g., from special modularization operators). This is what we call Self-Organized Modularization. Based on a combination of two formalizations by Radcliffe and Altenberg, some quantitative measures of modularity are introduced. Particularly, we distinguish Built-in Modularity as an inherent property of a genotype and Effective Modularity, which depends on the rest of the population. These measures can easily be applied to a wide range of present Evolutionary Computation models. It will be shown, both theoretically and by simulation, that under certain conditions, Effective Modularity (as defined within this paper) can be a selection factor. This causes Self-Organized Modularization to take place. The experimental observations emphasize the importance of Effective Modularity in comparison with Built-in Modularity. Although the experimental results have been obtained using a minimalist toy model, they can lead to a number of consequences for existing models as well as for future approaches. Furthermore, the results suggest a complex self-amplification of highly modular equivalence classes in the case of respected relations. Since the well-known Holland schemata are just the equivalence classes of respected relations in most Simple Genetic Algorithms, this observation emphasizes the role of schemata as Building Blocks (in comparison with arbitrary subsets of the search space).
Design and Implementation of Multi-Campus, Modular Master Classes in Biochemical Engineering
ERIC Educational Resources Information Center
Wuyts, Niek; Bruneel, Dorine; Meyers, Myriam; Van Hoof, Etienne; De Vos, Leander; Langie, Greet; Rediers, Hans
2015-01-01
The Master of Science in engineering technology: biochemical engineering is organised at KU Leuven across four geographically dispersed campuses. To sustain the Master's programmes at all campuses, it is clear that a unique education profile at each campus is crucial. In addition, rationalisation is required through increased cooperation, increased…
Machine learning for fab automated diagnostics
NASA Astrophysics Data System (ADS)
Giollo, Manuel; Lam, Auguste; Gkorou, Dimitra; Liu, Xing Lan; van Haren, Richard
2017-06-01
Process optimization depends largely on the field engineer's knowledge and expertise. However, this practice turns out to be less sustainable due to fab complexity, which is continuously increasing in order to support the extreme miniaturization of Integrated Circuits. On the one hand, process optimization and root cause analysis of tools is necessary for smooth fab operation. On the other hand, the growth in the number of wafer processing steps is adding a considerable new source of noise which may have a significant impact at the nanometer scale. This paper explores the ability of historical process data and Machine Learning to support field engineers in production analysis and monitoring. We implement an automated workflow in order to analyze a large volume of information, and build a predictive model of overlay variation. The proposed workflow addresses significant problems that are typical in fab production, like missing measurements, small numbers of samples, confounding effects due to heterogeneity of data, and subpopulation effects. We evaluate the proposed workflow on a real use case and show that it is able to predict overlay excursions observed in Integrated Circuits manufacturing. The chosen design focuses on linear and interpretable models of the wafer history, which highlight the process steps that are causing defective products. This is a fundamental feature for diagnostics, as it supports process engineers in the continuous improvement of the production line.
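An interpretable linear model over the wafer's processing history, with imputation for the missing measurements, can be sketched as follows. The feature names and data are synthetic placeholders, not the paper's dataset.

```python
# Interpretable overlay model: impute missing context measurements, then
# fit a sparse linear model whose coefficients point at suspect steps.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))             # per-wafer context for 5 process steps
X[rng.random(X.shape) < 0.1] = np.nan     # ~10% missing measurements
overlay = 2.0 * np.nan_to_num(X[:, 3]) + rng.normal(0.1, 0.2, 300)  # step 3 drives it

model = make_pipeline(SimpleImputer(strategy="mean"), Lasso(alpha=0.05))
model.fit(X, overlay)
print(model.named_steps["lasso"].coef_)   # large weight on step 3 flags the culprit
```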
Modular disposable can (MODCAN) crash cushion: A concept investigation
NASA Technical Reports Server (NTRS)
Knoell, A.; Wilson, A.
1976-01-01
A conceptual design investigation of an improved highway crash cushion system is presented. The system is referred to as a modular disposable can (MODCAN) crash system. It is composed of a modular arrangement of disposable metal beverage cans configured to serve as an effective highway impact attenuation system. Experimental data, design considerations, and engineering calculations supporting the design development are presented. Design performance is compared to that of a conventional steel drum system. It is shown that the MODCAN concept offers the potential for smoother and safer occupant deceleration for a larger class of vehicle impact weights than the steel drum device.
Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics
NASA Technical Reports Server (NTRS)
Kratz, Jonathan L.; Culley, Dennis E.; Chapman, Jeffryes W.
2017-01-01
The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics are discussed.
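For readers unfamiliar with 2-D heat transfer models of this kind, the sketch below shows a generic explicit finite-difference conduction update over an axial-by-radial grid. It is not the C-MAPSS40k thermal model; the geometry, diffusivity, and boundary temperatures are invented for illustration.

```python
# Hedged sketch: generic explicit 2-D heat-equation solver of the kind that
# could approximate casing temperature along the engine length (assumed values).
import numpy as np

nx, ny = 100, 10            # axial and through-thickness grid points
dx = dy = 0.01              # grid spacing (m)
alpha = 1e-5                # thermal diffusivity (m^2/s)
dt = 0.2 * min(dx, dy) ** 2 / alpha   # stable explicit time step

T = np.full((nx, ny), 300.0)             # initial casing temperature (K)
T_gas = np.linspace(450.0, 800.0, nx)    # hypothetical gas-side axial profile

for _ in range(5000):
    Tn = T.copy()
    # Interior update: dT/dt = alpha * (d2T/dx2 + d2T/dy2)
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt * (
        (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dx ** 2 +
        (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dy ** 2)
    T[:, 0] = T_gas          # inner surface driven by the hot gas path
    T[:, -1] = 320.0         # outer surface held near bay-air temperature

print(T[:, -1].max(), "K peak outer-casing temperature")  # electronics-side limit
```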
Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics
NASA Technical Reports Server (NTRS)
Kratz, Jonathan; Culley, Dennis; Chapman, Jeffryes
2016-01-01
The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented and their implications on temperature constraints for engine casing mounted electronics are discussed.
Integration of implant planning workflows into the PACS infrastructure
NASA Astrophysics Data System (ADS)
Gessat, Michael; Strauß, Gero; Burgert, Oliver
2008-03-01
The integration of imaging devices, diagnostic workstations, and image servers into Picture Archiving and Communication Systems (PACS) has had an enormous effect on the efficiency of radiology workflows. The standardization of the information exchange between the devices with the DICOM standard has been an essential precondition for that development. For surgical procedures, no such infrastructure exists. With the increasingly important role computerized planning and assistance systems play in the surgical domain, an infrastructure that unifies the communication between devices becomes necessary. In recent publications, the need for a modularized system design has been established. A reference architecture for a Therapy Imaging and Model Management System (TIMMS) has been proposed. It was accepted by the DICOM Working Group 6 as the reference architecture for DICOM developments for surgery. In this paper we propose the inclusion of implant planning systems into the PACS infrastructure. We propose a generic information model for the patient specific selection and positioning of implants from a repository according to patient image data. The information models are based on clinical workflows from ENT, cardiac, and orthopedic surgery as well as technical requirements derived from different use cases and systems. We show an exemplary implementation of the model for application in ENT surgery: the selection and positioning of an ossicular implant in the middle ear. An implant repository is stored in the PACS. It makes use of an experimental implementation of the Surface Mesh Module that is currently being developed as extension to the DICOM standard.
Design control for clinical translation of 3D printed modular scaffolds.
Hollister, Scott J; Flanagan, Colleen L; Zopf, David A; Morrison, Robert J; Nasser, Hassan; Patel, Janki J; Ebramzadeh, Edward; Sangiorgio, Sophia N; Wheeler, Matthew B; Green, Glenn E
2015-03-01
The primary thrust of tissue engineering is the clinical translation of scaffolds and/or biologics to reconstruct tissue defects. Despite this thrust, clinical translation of tissue engineering therapies from academic research has been minimal in the 27 year history of tissue engineering. Academic research by its nature focuses on, and rewards, initial discovery of new phenomena and technologies in the basic research model, with a view towards generality. Translation, however, by its nature must be directed at specific clinical targets, also denoted as indications, with associated regulatory requirements. These regulatory requirements, especially design control, require that the clinical indication be precisely defined a priori, unlike most academic basic tissue engineering research where the research target is typically open-ended, and furthermore requires that the tissue engineering therapy be constructed according to design inputs that ensure it treats or mitigates the clinical indication. Finally, regulatory approval dictates that the constructed system be verified, i.e., proven that it meets the design inputs, and validated, i.e., that by meeting the design inputs the therapy will address the clinical indication. Satisfying design control requires (1) a system of integrated technologies (scaffolds, materials, biologics), ideally based on a fundamental platform, as compared to focus on a single technology, (2) testing of design hypotheses to validate system performance as opposed to mechanistic hypotheses of natural phenomena, and (3) sequential testing using in vitro, in vivo, large preclinical and eventually clinical tests against competing therapies, as compared to single experiments to test new technologies or test mechanistic hypotheses. Our goal in this paper is to illustrate how design control may be implemented in academic translation of scaffold based tissue engineering therapies. Specifically, we propose to (1) demonstrate a modular platform approach founded on 3D printing for developing tissue engineering therapies and (2) illustrate the design control process for modular implementation of two scaffold based tissue engineering therapies: airway reconstruction and bone tissue engineering based spine fusion.
Design Control for Clinical Translation of 3D Printed Modular Scaffolds
Hollister, Scott J.; Flanagan, Colleen L.; Zopf, David A.; Morrison, Robert J.; Nasser, Hassan; Patel, Janki J.; Ebramzadeh, Edward; Sangiorgio, Sophia N.; Wheeler, Matthew B.; Green, Glenn E.
2015-01-01
The primary thrust of tissue engineering is the clinical translation of scaffolds and/or biologics to reconstruct tissue defects. Despite this thrust, clinical translation of tissue engineering therapies from academic research has been minimal in the 27 year history of tissue engineering. Academic research by its nature focuses on, and rewards, initial discovery of new phenomena and technologies in the basic research model, with a view towards generality. Translation, however, by its nature must be directed at specific clinical targets, also denoted as indications, with associated regulatory requirements. These regulatory requirements, especially design control, require that the clinical indication be precisely defined a priori, unlike most academic basic tissue engineering research where the research target is typically open-ended, and furthermore requires that the tissue engineering therapy be constructed according to design inputs that ensure it treats or mitigates the clinical indication. Finally, regulatory approval dictates that the constructed system be verified, i.e., proven that it meets the design inputs, and validated, i.e., that by meeting the design inputs the therapy will address the clinical indication. Satisfying design control requires (1) a system of integrated technologies (scaffolds, materials, biologics), ideally based on a fundamental platform, as compared to focus on a single technology, (2) testing of design hypotheses to validate system performance as opposed to mechanistic hypotheses of natural phenomena, and (3) sequential testing using in vitro, in vivo, large preclinical and eventually clinical tests against competing therapies, as compared to single experiments to test new technologies or test mechanistic hypotheses. Our goal in this paper is to illustrate how design control may be implemented in academic translation of scaffold based tissue engineering therapies. Specifically, we propose to (1) demonstrate a modular platform approach founded on 3D printing for developing tissue engineering therapies and (2) illustrate the design control process for modular implementation of two scaffold based tissue engineering therapies: airway reconstruction and bone tissue engineering based spine fusion. PMID:25666115
User's Guide for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS)
NASA Technical Reports Server (NTRS)
Frederick, Dean K.; DeCastro, Jonathan A.; Litt, Jonathan S.
2007-01-01
This report is a User's Guide for the NASA-developed Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) software, which is a transient simulation of a large commercial turbofan engine (up to 90,000-lb thrust) with a realistic engine control system. The software supports easy access to health, control, and engine parameters through a graphical user interface (GUI). C-MAPSS provides the user with a graphical turbofan engine simulation environment in which advanced algorithms can be implemented and tested. C-MAPSS can run user-specified transient simulations, and it can generate state-space linear models of the nonlinear engine model at an operating point. The code has a number of GUI screens that allow point-and-click operation and have editable fields for user-specified input. The software includes an atmospheric model which allows simulation of engine operation at altitudes from sea level to 40,000 ft, Mach numbers from 0 to 0.90, and ambient temperatures from -60 to 103 F. The package also includes a power-management system that allows the engine to be operated over a wide range of thrust levels throughout the full range of flight conditions.
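Generating a state-space linear model at an operating point typically amounts to numerically differentiating the nonlinear dynamics around a trim condition. C-MAPSS itself is MATLAB-based; the sketch below shows the generic central-difference idea with a stand-in dynamics function, not the actual C-MAPSS code.

```python
# Hedged sketch: extract A and B matrices of xdot ~ A(x - x0) + B(u - u0)
# by central finite differences around a trim point (stand-in dynamics).
import numpy as np

def f(x, u):
    """Stand-in nonlinear dynamics xdot = f(x, u); not an engine model."""
    return np.array([-0.5 * x[0] + 0.1 * x[1] ** 2 + u[0],
                     0.2 * x[0] - 0.8 * x[1] + 2.0 * u[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):                      # perturb each state
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                      # perturb each input
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([1.0, 0.5]), u0=np.array([0.3]))
print(A, B, sep="\n")
```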
Engineering genetic circuit interactions within and between synthetic minimal cells
NASA Astrophysics Data System (ADS)
Adamala, Katarzyna P.; Martin-Alarcon, Daniel A.; Guthrie-Honea, Katriona R.; Boyden, Edward S.
2017-05-01
Genetic circuits and reaction cascades are of great importance for synthetic biology, biochemistry and bioengineering. An open question is how to maximize the modularity of their design to enable the integration of different reaction networks and to optimize their scalability and flexibility. One option is encapsulation within liposomes, which enables chemical reactions to proceed in well-isolated environments. Here we adapt liposome encapsulation to enable the modular, controlled compartmentalization of genetic circuits and cascades. We demonstrate that it is possible to engineer genetic circuit-containing synthetic minimal cells (synells) to contain multiple-part genetic cascades, and that these cascades can be controlled by external signals as well as inter-liposomal communication without crosstalk. We also show that liposomes that contain different cascades can be fused in a controlled way so that the products of incompatible reactions can be brought together. Synells thus enable a more modular creation of synthetic biology cascades, an essential step towards their ultimate programmability.
Evolution of synthetic signaling scaffolds by recombination of modular protein domains.
Lai, Andicus; Sato, Paloma M; Peisajovich, Sergio G
2015-06-19
Signaling scaffolds are proteins that interact via modular domains with multiple partners, regulating signaling networks in space and time and providing an ideal platform from which to alter signaling functions. However, to better exploit scaffolds for signaling engineering, it is necessary to understand the full extent of their modularity. We used a directed evolution approach to identify, from a large library of randomly shuffled protein interaction domains, variants capable of rescuing the signaling defect of a yeast strain in which Ste5, the scaffold in the mating pathway, had been deleted. After a single round of selection, we identified multiple synthetic scaffold variants with diverse domain architectures, able to mediate mating pathway activation in a pheromone-dependent manner. The facility with which this signaling network accommodates changes in scaffold architecture suggests that the mating signaling complex does not possess a single, precisely defined geometry into which the scaffold has to fit. These relaxed geometric constraints may facilitate the evolution of signaling networks, as well as their engineering for applications in synthetic biology.
AN IMPROVEMENT TO THE MOUSE COMPUTERIZED UNCERTAINTY ANALYSIS SYSTEM
The original MOUSE (Modular Oriented Uncertainty System) system was designed to deal with the problem of uncertainties in environmental engineering calculations, such as a set of engineering cost or risk analysis equations. It was especially intended for use by individuals with l...
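The general idea behind this kind of uncertainty analysis is Monte Carlo propagation of input distributions through an engineering equation. The sketch below illustrates that idea only; the cost equation and distributions are invented, not MOUSE's.

```python
# Hedged sketch: Monte Carlo uncertainty propagation through a toy
# engineering cost equation (all distributions and units are assumptions).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
flow = rng.normal(100.0, 10.0, n)            # waste stream flow (m^3/day)
conc = rng.lognormal(np.log(5.0), 0.3, n)    # contaminant conc. (mg/L)
unit_cost = rng.uniform(0.8, 1.2, n)         # treatment cost ($ per g removed)

# Toy cost equation: m^3/day -> L/day (x1000), mg -> g (x1e-3).
cost = flow * 1000 * conc * 1e-3 * unit_cost   # $/day

print(f"mean cost = {cost.mean():.0f} $/day")
print(f"95th percentile = {np.percentile(cost, 95):.0f} $/day")
```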
NASA Astrophysics Data System (ADS)
Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.
2015-11-01
We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.
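The final step of such a conversion is writing an unstructured grid to VTU. As a minimal sketch, and assuming the open-source meshio library rather than the authors' own tool, the geometry and material IDs below are invented placeholders for what the GOCAD model would supply.

```python
# Hedged sketch: write a tiny unstructured grid to VTU with meshio
# (an assumption; points, cells, and material IDs are invented).
import numpy as np
import meshio

points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
cells = [("tetra", np.array([[0, 1, 2, 3]]))]

# Per-cell material IDs can carry the GOCAD layer/fault assignment.
mesh = meshio.Mesh(points, cells,
                   cell_data={"MaterialIDs": [np.array([1])]})
meshio.write("model.vtu", mesh)   # VTK unstructured grid, readable by ParaView
```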
Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E
2014-01-01
Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, from simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
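The core of such a processing engine is dependency tracking: each module declares inputs and outputs, and the engine reruns only stages whose outputs are missing or stale. The sketch below illustrates that pattern generically; it is not the aa API, and all file names are invented.

```python
# Hedged sketch of a dependency-tracking pipeline engine (not the aa API):
# a stage is skipped when its outputs exist and are newer than its inputs.
import os
from pathlib import Path

class Module:
    def __init__(self, name, inputs, outputs, run):
        self.name, self.inputs, self.outputs, self.run = name, inputs, outputs, run

    def stale(self):
        if not all(os.path.exists(o) for o in self.outputs):
            return True
        newest_in = max((os.path.getmtime(i) for i in self.inputs), default=0)
        oldest_out = min(os.path.getmtime(o) for o in self.outputs)
        return newest_in > oldest_out

def execute(pipeline):
    for mod in pipeline:                 # assumed topologically ordered
        if mod.stale():
            print("running", mod.name)
            mod.run()
        else:
            print("skipping", mod.name, "(up to date)")

# Example: two chained stages writing marker files.
Path("raw.dat").write_text("data")
pipeline = [
    Module("preprocess", ["raw.dat"], ["pre.dat"],
           lambda: Path("pre.dat").write_text("pre")),
    Module("stats", ["pre.dat"], ["stats.dat"],
           lambda: Path("stats.dat").write_text("stats")),
]
execute(pipeline)   # a second call would skip both stages
```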
The VERCE platform: Enabling Computational Seismology via Streaming Workflows and Science Gateways
NASA Astrophysics Data System (ADS)
Spinuso, Alessandro; Filgueira, Rosa; Krause, Amrey; Matser, Jonas; Casarotti, Emanuele; Magnoni, Federica; Gemund, Andre; Frobert, Laurent; Krischer, Lion; Atkinson, Malcolm
2015-04-01
The VERCE project is creating an e-Science platform to facilitate innovative data analysis and coding methods that fully exploit the wealth of data in global seismology. One of the technologies developed within the project is the Dispel4Py python library, which allows users to describe abstract stream-based workflows for data-intensive applications and to execute them in a distributed environment. At runtime Dispel4Py is able to map workflow descriptions dynamically onto a number of computational resources (Apache Storm clusters, MPI-powered clusters, shared-memory multi-core machines, and single-core machines), setting it apart from other workflow frameworks. Therefore, Dispel4Py enables scientists to focus on their computation instead of being distracted by details of the computing infrastructure they use. Among the workflows developed with Dispel4Py in VERCE, we mention here those for Seismic Ambient Noise Cross-Correlation and MISFIT calculation, which address two data-intensive problems that are common in computational seismology. The former, also called Passive Imaging, allows the detection of relative seismic-wave velocity variations during the time of recording, to be associated with the stress-field changes that occurred in the test area. The MISFIT calculation instead takes as input the synthetic seismograms generated from HPC simulations for a certain Earth model and earthquake and, after a preprocessing stage, compares them with real observations in order to foster subsequent model updates and improvement (Inversion). The VERCE Science Gateway exposes the MISFIT calculation workflow as a service, in combination with the simulation phase. Both phases can be configured, controlled, and monitored by the user via a rich user interface which is integrated within the gUSE Science Gateway framework, hiding the complexity of accessing third-party data services, security mechanisms, and enactment on the target resources. Thanks to a modular extension to the Dispel4Py framework, the system collects provenance data adopting the W3C-PROV data model. Provenance recordings can be explored and analysed at run time for rapid diagnostics and workflow steering, or later for further validation and comparisons across runs. We will illustrate the interactive services of the gateway and the capabilities of the produced metadata, coupled with the VERCE data management layer based on iRODS. The Cross-Correlation workflow was evaluated on SuperMUC, a supercomputing cluster at the Leibniz Supercomputing Centre in Munich, with 155,656 processor cores in 9400 compute nodes. SuperMUC is based on the Intel Xeon architecture, consisting of 18 Thin Node Islands and one Fat Node Island. This work only had access to the Thin Node Islands, which contain Sandy Bridge nodes, each having 16 cores and 32 GB of memory. In the evaluations we used 1000 stations, and we applied two types of methods (whiten and non-whiten) for pre-processing the data. The workflow was tested on a varying number of cores (16, 32, 64, 128, and 256) using the MPI mapping of Dispel4Py. The results show that Dispel4Py is able to improve performance by increasing the number of cores without changing the description of the workflow.
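To illustrate the stream-based workflow idea without reproducing the actual Dispel4Py API, the sketch below builds a cross-correlation-style pipeline from plain Python generators: processing elements consume and emit items, so the same abstract graph could later be mapped to MPI ranks, Storm bolts, or threads. The station names and trace data are invented.

```python
# Hedged sketch of a stream-based workflow (generic generators, not Dispel4Py):
# source PE -> preprocessing PE -> pairing PE for cross-correlation.
def read_traces(station_ids):
    for sid in station_ids:              # source PE: emit one trace per station
        yield {"station": sid, "trace": [0.1, 0.2, 0.3]}

def preprocess(stream):
    for item in stream:                  # e.g. demeaning; whitening would go here
        mean = sum(item["trace"]) / len(item["trace"])
        item["trace"] = [x - mean for x in item["trace"]]
        yield item

def pair_stations(stream):
    seen = []
    for item in stream:                  # pair each trace with all earlier ones
        for other in seen:
            yield (other["station"], item["station"])
        seen.append(item)

pipeline = pair_stations(preprocess(read_traces(["ST1", "ST2", "ST3"])))
for pair in pipeline:
    print("correlating", pair)
```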
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business-process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task that requires substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report for various estimated performance metrics (e.g., throughput, turnaround time, resource utilization) and operational cost. By integrating the digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. The tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
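To make the Petri-net-plus-costing combination concrete, here is a minimal sketch, not the paper's engine: transitions fire when their input places hold enough tokens, and each firing accrues time and cost under a single-resource (sequential) assumption. The print-shop numbers are invented.

```python
# Hedged sketch: a tiny Petri-net workflow simulator with activity-based
# costing (invented places, transitions, times, and costs).
places = {"queued": 5, "printed": 0, "bound": 0}
transitions = [
    # (name, inputs, outputs, minutes per firing, $ per firing)
    ("print", {"queued": 1}, {"printed": 1}, 2.0, 0.50),
    ("bind",  {"printed": 1}, {"bound": 1},  3.0, 0.80),
]

clock, cost = 0.0, 0.0
fired = True
while fired:                       # fire enabled transitions until quiescent
    fired = False
    for name, ins, outs, minutes, dollars in transitions:
        if all(places[p] >= n for p, n in ins.items()):
            for p, n in ins.items():
                places[p] -= n
            for p, n in outs.items():
                places[p] += n
            clock += minutes       # sequential-resource assumption
            cost += dollars
            fired = True

print(f"throughput: {places['bound']} jobs, "
      f"turnaround {clock:.0f} min, cost ${cost:.2f}")
```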
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies that bridge the knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy-to-use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
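The workflow's end product is an SBML model. As a minimal sketch of what such an artifact looks like when built programmatically, the snippet below uses the libsbml Python bindings to encode a single constitutive-production reaction; the real workflow derives this structure from SBOL designs, and the species and rate here are invented.

```python
# Hedged sketch: build a minimal SBML Level 3 model with libsbml
# (invented species "GFP" and rate constant; not the paper's generated model).
import libsbml

document = libsbml.SBMLDocument(3, 1)
model = document.createModel()

c = model.createCompartment()
c.setId("cell"); c.setConstant(True); c.setSize(1.0)

s = model.createSpecies()
s.setId("GFP"); s.setCompartment("cell"); s.setInitialAmount(0.0)
s.setConstant(False); s.setBoundaryCondition(False)
s.setHasOnlySubstanceUnits(False)

k = model.createParameter()
k.setId("k"); k.setValue(0.5); k.setConstant(True)

r = model.createReaction()            # constitutive production: -> GFP, rate k
r.setId("production"); r.setReversible(False); r.setFast(False)
p = r.createProduct()
p.setSpecies("GFP"); p.setConstant(True)
r.createKineticLaw().setMath(libsbml.parseL3Formula("k"))

print(libsbml.writeSBMLToString(document))   # SBML ready for a simulator
```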
David, Fabrice P A; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing, including ChIP-seq, RNA-seq, 4C-seq, and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure, and run HTS analysis pipelines reactively. In addition, our programming framework empowers developers to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch.
HTSstation: A Web Application and Open-Access Libraries for High-Throughput Sequencing Data Analysis
David, Fabrice P. A.; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J.; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing, including ChIP-seq, RNA-seq, 4C-seq, and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure, and run HTS analysis pipelines reactively. In addition, our programming framework empowers developers to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch. PMID:24475057
Achieving Energy Savings in Municipal Construction in Long Beach California
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Long Beach Gas and Oil (LBGO), the public gas utility in Long Beach, California, partnered with the U.S. Department of Energy (DOE) to develop and implement solutions to build a new, low-energy modular office building that uses at least 50% less energy than required by Energy Standard 90.1-2007 of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE), the American National Standards Institute (ANSI), and the Illuminating Engineering Society of North America (IESNA), as part of DOE's Commercial Building Partnerships (CBP) program. The LBGO building, which demonstrates that modular construction can be very energy efficient, is expected to exceed the ASHRAE baseline by about 45%.
Synthetic biology of antimicrobial discovery
Zakeri, Bijan; Lu, Timothy K.
2012-01-01
Antibiotic discovery has a storied history. From the discovery of penicillin by Sir Alexander Fleming to the relentless quest for antibiotics by Selman Waksman, the stories have become like folklore, used to inspire future generations of scientists. However, recent discovery pipelines have run dry at a time when multidrug resistant pathogens are on the rise. Nature has proven to be a valuable reservoir of antimicrobial agents, which are primarily produced by modularized biochemical pathways. Such modularization is well suited to remodeling by an interdisciplinary approach that spans science and engineering. Herein, we discuss the biological engineering of small molecules, peptides, and non-traditional antimicrobials and provide an overview of the growing applicability of synthetic biology to antimicrobials discovery. PMID:23654251
Synthetic biology of antimicrobial discovery.
Zakeri, Bijan; Lu, Timothy K
2013-07-19
Antibiotic discovery has a storied history. From the discovery of penicillin by Sir Alexander Fleming to the relentless quest for antibiotics by Selman Waksman, the stories have become like folklore used to inspire future generations of scientists. However, recent discovery pipelines have run dry at a time when multidrug-resistant pathogens are on the rise. Nature has proven to be a valuable reservoir of antimicrobial agents, which are primarily produced by modularized biochemical pathways. Such modularization is well suited to remodeling by an interdisciplinary approach that spans science and engineering. Herein, we discuss the biological engineering of small molecules, peptides, and non-traditional antimicrobials and provide an overview of the growing applicability of synthetic biology to antimicrobials discovery.
Lan, Hongzhi; Updegrove, Adam; Wilson, Nathan M; Maher, Gabriel D; Shadden, Shawn C; Marsden, Alison L
2018-02-01
Patient-specific simulation plays an important role in cardiovascular disease research, diagnosis, surgical planning, and medical device design, as well as education in cardiovascular biomechanics. SimVascular is an open-source software package encompassing an entire cardiovascular modeling and simulation pipeline from image segmentation, three-dimensional (3D) solid modeling, and mesh generation, to patient-specific simulation and analysis. SimVascular is widely used for cardiovascular basic science and clinical research as well as education, following increased adoption by users and development of a GATEWAY web portal to facilitate educational access. Initial efforts of the project focused on replacing commercial packages with open-source alternatives and adding increased functionality for multiscale modeling, fluid-structure interaction (FSI), and solid modeling operations. In this paper, we introduce a major SimVascular (SV) release that includes a new graphical user interface (GUI) designed to improve user experience. Additional improvements include enhanced data/project management, interactive tools to facilitate user interaction, new boundary condition (BC) functionality, a plug-in mechanism to increase modularity, a new 3D segmentation tool, and new computer-aided design (CAD)-based solid modeling capabilities. Here, we focus on major changes to the software platform and outline features added in this new release. We also briefly describe our recent experiences using SimVascular in the classroom for bioengineering education.
Antares: A low cost modular launch vehicle for the future
NASA Technical Reports Server (NTRS)
1991-01-01
The single-stage-to-orbit launch vehicle Antares is a revolutionary concept based on identical modular units, enabling the Antares to efficiently launch communications satellites, as well as heavy payloads, into Earth orbit and beyond. The basic unit of the modular system, a single Antares vehicle, is aimed at launching approximately 10,000 kg (22,000 lb) into low Earth orbit (LEO). When coupled with a standard Centaur upper stage, it is capable of placing 4000 kg (8800 lb) into geosynchronous Earth orbit (GEO). The Antares incorporates a reusable engine, the Dual Mixture Ratio Engine (DMRE), as its propulsive device. This enables Antares to compete and excel in the satellite launch market by dramatically reducing launch costs. Inherent in the design is the capability to attach several of these vehicles together to provide heavy lift capability. Any number of these vehicles can be attached depending on the payload and mission requirements. With a seven-vehicle configuration, the Antares' modular concept provides a heavy lift capability of approximately 70,000 kg (154,000 lb) to LEO. This expandability allows for a wide range of payload options, such as large Earth satellites, Space Station Freedom material, and interplanetary spacecraft, and also offers a significant cost savings over a mixed fleet based on different launch vehicles.
Antares: A low cost modular launch vehicle for the future
NASA Astrophysics Data System (ADS)
The single-stage-to-orbit launch vehicle Antares is a revolutionary concept based on identical modular units, enabling the Antares to efficiently launch communications satellites, as well as heavy payloads, into Earth orbit and beyond. The basic unit of the modular system, a single Antares vehicle, is aimed at launching approximately 10,000 kg (22,000 lb) into low Earth orbit (LEO). When coupled with a standard Centaur upper stage, it is capable of placing 4000 kg (8800 lb) into geosynchronous Earth orbit (GEO). The Antares incorporates a reusable engine, the Dual Mixture Ratio Engine (DMRE), as its propulsive device. This enables Antares to compete and excel in the satellite launch market by dramatically reducing launch costs. Inherent in the design is the capability to attach several of these vehicles together to provide heavy lift capability. Any number of these vehicles can be attached depending on the payload and mission requirements. With a seven-vehicle configuration, the Antares' modular concept provides a heavy lift capability of approximately 70,000 kg (154,000 lb) to LEO. This expandability allows for a wide range of payload options, such as large Earth satellites, Space Station Freedom material, and interplanetary spacecraft, and also offers a significant cost savings over a mixed fleet based on different launch vehicles.
Modular Aero-Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2006-01-01
The Modular Aero-Propulsion System Simulation (MAPSS) is a graphical simulation environment designed for the development of advanced control algorithms and rapid testing of these algorithms on a generic computational model of a turbofan engine and its control system. MAPSS is a nonlinear, non-real-time simulation comprising a Component Level Model (CLM) module and a Controller-and-Actuator Dynamics (CAD) module. The CLM module simulates the dynamics of engine components at a sampling rate of 2,500 Hz. The controller submodule of the CAD module simulates a digital controller, which has a typical update rate of 50 Hz. The sampling rate for the actuators in the CAD module is the same as that of the CLM. MAPSS provides a graphical user interface that affords easy access to engine-operation, engine-health, and control parameters; is used to enter such input model parameters as power lever angle (PLA), Mach number, and altitude; and can be used to change controller and engine parameters. Output variables are selectable by the user. Output data as well as any changes to constants and other parameters can be saved and reloaded into the GUI later.
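The abstract describes a multi-rate structure: a component-level plant model stepped at 2,500 Hz with a digital controller updated at 50 Hz. The sketch below illustrates only that loop structure; the first-order plant and proportional gain are invented stand-ins, not MAPSS (which is a MATLAB/Simulink environment).

```python
# Hedged sketch: multi-rate simulation loop, plant at 2,500 Hz and a digital
# controller at 50 Hz (50:1 ratio). Plant dynamics and gain are assumptions.
dt = 1.0 / 2500.0            # plant/actuator sampling period (CLM rate)
ctrl_ratio = 50              # controller runs every 50th plant step (50 Hz)

x, u, target = 0.0, 0.0, 1.0
for step in range(2500):     # one second of simulated time
    if step % ctrl_ratio == 0:
        u = 5.0 * (target - x)      # proportional controller, updated at 50 Hz
    x += dt * (-x + u)              # first-order plant, integrated at 2,500 Hz

print(f"state after 1 s: {x:.3f}")
```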
Yuzawa, Satoshi; Keasling, Jay D; Katz, Leonard
2017-04-01
Complex polyketides comprise a large number of natural products that have broad application in medicine and agriculture. They are produced in bacteria and fungi from large enzyme complexes named type I modular polyketide synthases (PKSs) that are composed of multifunctional polypeptides containing discrete enzymatic domains organized into modules. The modular nature of PKSs has enabled a multitude of efforts to engineer the PKS genes to produce novel polyketides of predicted structure. We have repurposed PKSs to produce a number of short-chain mono- and di-carboxylic acids and ketones that could have applications as fuels or industrial chemicals.
Linear aerospike engine. [for reusable single-stage-to-orbit vehicle
NASA Technical Reports Server (NTRS)
Kirby, F. M.; Martinez, A.
1977-01-01
A description is presented of a dual-fuel modular split-combustor linear aerospike engine concept. The considered engine represents an approach to an integrated engine for a reusable single-stage-to-orbit (SSTO) vehicle. The engine burns two fuels (hydrogen and a hydrocarbon) with oxygen in separate combustors. Combustion gases expand on a linear aerospike nozzle. An engine preliminary design is discussed. Attention is given to the evaluation process for selecting the optimum number of modules or divisions of the engine, aspects of cooling and power cycle balance, and details of engine operation.
Predicting the behavior of microfluidic circuits made from discrete elements
Bhargava, Krisna C.; Thompson, Bryant; Iqbal, Danish; Malmstadt, Noah
2015-01-01
Microfluidic devices can be used to execute a variety of continuous flow analytical and synthetic chemistry protocols with a great degree of precision. The growing availability of additive manufacturing has enabled the design of microfluidic devices with new functionality and complexity. However, these devices are prone to larger manufacturing variation than is typical of those made with micromachining or soft lithography. In this report, we demonstrate a design-for-manufacturing workflow that addresses performance variation at the microfluidic element and circuit level, in the context of mass manufacturing and additive manufacturing. Our approach relies on discrete microfluidic elements that are characterized by their terminal hydraulic resistance and associated tolerance. Network analysis is employed to construct simple analytical design rules for model microfluidic circuits. Monte Carlo analysis is employed at both the individual element and circuit level to establish expected performance metrics for several specific circuit configurations. A protocol based on osmometry is used to experimentally probe mixing behavior in circuits in order to validate these approaches. The overall workflow is applied to two application circuits with immediate use on the benchtop: series and parallel mixing circuits that are modularly programmable, virtually predictable, highly precise, and operable by hand. PMID:26516059
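The element-to-circuit analysis rests on the hydraulic analogue of Ohm's law (Q = ΔP / R). The sketch below shows the general pattern, not the paper's specific circuits: a two-inlet mixer's mixing fraction follows from network analysis, and element tolerances are propagated by Monte Carlo. The resistances and 5% tolerance are invented.

```python
# Hedged sketch: Monte Carlo tolerance analysis of a two-inlet hydraulic
# mixer (invented element values; generic network analysis, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
tol = 0.05                                      # 5% relative tolerance/element

R1 = 1.0 * (1 + tol * rng.standard_normal(n))   # inlet 1 resistance (arb. units)
R2 = 2.0 * (1 + tol * rng.standard_normal(n))   # inlet 2 resistance

# With equal inlet pressures, flows divide inversely with resistance
# (Q = dP / R), so the mixing fraction from inlet 1 is R2 / (R1 + R2).
frac = R2 / (R1 + R2)
print(f"mixing fraction: {frac.mean():.3f} +/- {frac.std():.3f}")
```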
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aiming at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, the multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
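For context, standard TDA-based FDR estimation, whose uniformity assumption the abstract says multi-stage searches can violate, is simply the ratio of decoy to target matches above a score threshold. A minimal sketch with invented scores:

```python
# Hedged sketch: classical target-decoy FDR estimation at a score threshold
# (toy scores; not the paper's decoy fusion implementation).
def tda_fdr(psms, threshold):
    """psms: list of (score, is_decoy) peptide-spectrum matches."""
    targets = sum(1 for s, d in psms if s >= threshold and not d)
    decoys = sum(1 for s, d in psms if s >= threshold and d)
    return decoys / targets if targets else 0.0

psms = [(10.2, False), (9.1, False), (8.7, True), (8.5, False), (7.9, True)]
print(f"FDR at score 8.0: {tda_fdr(psms, 8.0):.2f}")
# The estimate is unbiased only if false matches hit targets and decoys
# equally often; a second-stage search that keeps unequal numbers of target
# and decoy proteins breaks this, which decoy fusion is designed to restore.
```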
Modular extracellular sensor architecture for engineering mammalian cell-based devices.
Daringer, Nichole M; Dudek, Rachel M; Schwarz, Kelly A; Leonard, Joshua N
2014-12-19
Engineering mammalian cell-based devices that monitor and therapeutically modulate human physiology is a promising and emerging frontier in clinical synthetic biology. However, realizing this vision will require new technologies enabling engineered circuitry to sense and respond to physiologically relevant cues. No existing technology enables an engineered cell to sense exclusively extracellular ligands, including proteins and pathogens, without relying upon native cellular receptors or signal transduction pathways that may be subject to crosstalk with native cellular components. To address this need, we here report a technology we term a Modular Extracellular Sensor Architecture (MESA). This self-contained receptor and signal transduction platform is maximally orthogonal to native cellular processes and comprises independent, tunable protein modules that enable performance optimization and straightforward engineering of novel MESA that recognize novel ligands. We demonstrate ligand-inducible activation of MESA signaling, optimization of receptor performance using design-based approaches, and generation of MESA biosensors that produce outputs in the form of either transcriptional regulation or transcription-independent reconstitution of enzymatic activity. This systematic, quantitative platform characterization provides a framework for engineering MESA to recognize novel ligands and for integrating these sensors into diverse mammalian synthetic biology applications.
Modular uncooled video engines based on a DSP processor
NASA Astrophysics Data System (ADS)
Schapiro, F.; Milstain, Y.; Aharon, A.; Neboshchik, A.; Ben-Simon, Y.; Kogan, I.; Lerman, I.; Mizrahi, U.; Maayani, S.; Amsterdam, A.; Vaserman, I.; Duman, O.; Gazit, R.
2011-06-01
The market demand for low SWaP (Size, Weight and Power) uncooled engines keeps growing. Low SWaP is especially critical in battery-operated applications such as goggles and Thermal Weapon Sights. A new approach for the design of the engines was implemented by SCD to optimize size and power consumption at system level. The new approach described in the paper consists of: 1. A modular hardware design that allows the user to define the exact level of integration needed for his system. 2. An "open architecture" based on the OMAP™530 DSP that allows the integrator to take advantage of unused hardware (FPGA) and software (DSP) resources, for implementation of additional algorithms or functionality. The approach was successfully implemented on the first generation of 25 μm pitch BIRD detectors, and more recently on the new 640 × 480, 17 μm pitch detector.
MBAT: a scalable informatics system for unifying digital atlasing workflows.
Lee, Daren; Ruffins, Seth; Ng, Queenie; Sane, Nikhil; Anderson, Steve; Toga, Arthur
2010-12-22
Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continues to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment that accelerates the workflow to gather, align, and analyze the data. The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend the basic workspace functionality and to allow future extensions. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not always quickly enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
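The data-decomposition idea can be shown in miniature as scatter-gather parallelism that leaves the search engine untouched. In the sketch below, `run_search` is a hypothetical stand-in for invoking an external engine such as X!Tandem on one chunk; real mzXML splitting and pepXML merging are more involved than this list handling.

```python
# Hedged sketch: split the spectra, search each chunk in parallel with the
# unmodified engine, then merge ("recompose") the results.
from concurrent.futures import ProcessPoolExecutor

def run_search(chunk):
    """Hypothetical stand-in for an external search-engine run on one chunk."""
    return [f"PSM for {spectrum}" for spectrum in chunk]

def decompose(spectra, n_chunks):
    # Round-robin split keeps chunk sizes balanced.
    return [spectra[i::n_chunks] for i in range(n_chunks)]

if __name__ == "__main__":
    spectra = [f"scan{i}" for i in range(1000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_results = pool.map(run_search, decompose(spectra, 4))
    merged = [psm for part in partial_results for psm in part]  # recompose
    print(len(merged), "identifications")
```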
Modular Algorithm Testbed Suite (MATS): A Software Framework for Automatic Target Recognition
2017-01-01
MATS supports naval mine countermeasures (MCM) operations by automating a large portion of the data analysis. Successful long-term implementation of ATR requires a...
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.
2006-03-01
One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
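The case/workflow pattern described above can be summarized in a few lines of code: a workflow is an ordered list of reusable modules, and a case binds one patient's data to one workflow. This is an illustrative sketch with invented names, not the BIR framework's API.

```python
# Hedged sketch of the case/workflow composition pattern (illustrative only).
from dataclasses import dataclass, field
from typing import Callable, List

Module = Callable[[dict], dict]   # each module transforms a shared context

def load_images(ctx):
    ctx["images"] = f"volumes for {ctx['patient']}"
    return ctx

def segment(ctx):
    ctx["mask"] = "segmentation of " + ctx["images"]
    return ctx

def measure(ctx):
    ctx["report"] = "volumetrics from " + ctx["mask"]
    return ctx

@dataclass
class Workflow:
    name: str
    modules: List[Module] = field(default_factory=list)

    def run(self, ctx: dict) -> dict:
        for module in self.modules:   # same modules reused across workflows
            ctx = module(ctx)
        return ctx

tumor_followup = Workflow("tumor follow-up", [load_images, segment, measure])
case = {"patient": "patient-001"}     # a case: one patient, one workflow
print(tumor_followup.run(case)["report"])
```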
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.
2008-12-01
NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.
2009-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operations in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
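The pairwise "matchup" step both abstracts describe, finding for each retrieval in one swath the nearest observation from another within space and time windows, is commonly done with a KD-tree. The sketch below shows that generic technique on synthetic coordinates; it is not SciFlo's implementation, and the time-to-distance scaling and tolerance are invented (longitude wraparound is ignored for brevity).

```python
# Hedged sketch: nearest-neighbor space/time matchup of two instrument swaths
# using a KD-tree (synthetic coordinates; not the SciFlo matchup service).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
# (lat, lon, hours) for two instrument swaths
airs = np.column_stack([rng.uniform(-60, 60, 5000),
                        rng.uniform(-180, 180, 5000),
                        rng.uniform(0, 24, 5000)])
modis = np.column_stack([rng.uniform(-60, 60, 8000),
                         rng.uniform(-180, 180, 8000),
                         rng.uniform(0, 24, 8000)])

# Scale time so 1 hour "costs" as much as 0.25 degrees of separation.
scale = np.array([1.0, 1.0, 0.25])
tree = cKDTree(modis * scale)
dist, idx = tree.query(airs * scale, k=1)

matched = dist < 0.5          # accept matchups within a combined tolerance
print(matched.sum(), "of", len(airs), "AIRS retrievals matched to MODIS")
```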
Breaching the Phalanx: Developing a More Engineer-Centric Modular BCT
2007-06-05
Obersturmbannfuehrer Joachim Peiper's Kampfgruppe Peiper (of the I SS Panzer Corps). The 1111th Engineer commander visualized a defensive scheme for the... the engineer planner scheduled a planning session with the I MEF engineer staff. This planning session was held at Camp Pendleton in September... covered sleeping/work areas, ammo storage, etc.). As the units did not begin the invasion until 20 March, this meant the BCTs lived in abject squalor
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application composed of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054
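The step-by-step, swappable-stage design can be pictured with a minimal Python sketch (illustrative only; it does not reproduce TMSEEG's MATLAB algorithms, and both step functions below are simplified stand-ins):

```python
# Sketch of a modular, step-by-step signal-processing pipeline
# (illustrative only; not TMSEEG's actual algorithms).
import numpy as np

def remove_pulse_artifact(eeg, pulse_idx, width=20):
    """Blank the samples around the TMS pulse by linear interpolation."""
    out = eeg.copy()
    lo, hi = pulse_idx - width, pulse_idx + width
    for ch in range(out.shape[0]):
        out[ch, lo:hi] = np.linspace(out[ch, lo], out[ch, hi], hi - lo)
    return out

def demean(eeg, pulse_idx):
    """Remove each channel's mean (stand-in for baseline correction)."""
    return eeg - eeg.mean(axis=1, keepdims=True)

# The pipeline is an ordered, editable list of steps: users can reorder,
# drop, or append custom callables that share the same signature.
PIPELINE = [remove_pulse_artifact, demean]

def run(eeg, pulse_idx, steps=PIPELINE):
    for step in steps:
        eeg = step(eeg, pulse_idx)   # a QC hook could plot TEPs here
    return eeg

data = np.random.randn(8, 1000)      # 8 channels x 1000 samples
clean = run(data, pulse_idx=500)
```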
NASA Astrophysics Data System (ADS)
Tomlin, M. C.; Jenkyns, R.
2015-12-01
Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.
A fully actuated robotic assistant for MRI-guided prostate biopsy and brachytherapy
NASA Astrophysics Data System (ADS)
Li, Gang; Su, Hao; Shang, Weijian; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fischer, Gregory S.
2013-03-01
Intra-operative medical imaging enables incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high resolution, real-time, high soft tissue contrast imaging without ionizing radiation. However, for most current image-guided approaches only static pre-operative images are accessible for guidance, which are unable to provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI render great challenges to developing robotic systems that can perform inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for actuated percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g. taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm.
Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana
2015-06-30
Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org. A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.
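As an illustration of the kind of pre-processing such a module wraps, here is a minimal quantile-normalization sketch in Python (one common bead-array normalization choice; the module's actual routines come from R/Bioconductor packages, not this code):

```python
# Quantile normalization sketch (illustrative; the ArrayAnalysis.org module
# itself wraps R/Bioconductor routines rather than this code).
import numpy as np

def quantile_normalize(x):
    """Force every column (array/sample) of x to share the same distribution:
    rank each column, then replace rank k by the mean of all k-th ranked values."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # 0-based rank per column
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)     # target distribution
    return mean_by_rank[ranks]

probes = np.array([[5.0, 4.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 4.0, 6.0],
                   [4.0, 2.0, 8.0]])
print(quantile_normalize(probes))  # columns now share identical quantiles
```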
Flexible Description Language for HPC based Processing of Remote Sense Data
NASA Astrophysics Data System (ADS)
Nandra, Constantin; Gorgan, Dorian; Bacu, Victor
2016-04-01
When talking about Big Data, the most challenging aspect lies in processing them in order to gain new insight, find new patterns and extract knowledge from them. This problem is likely most apparent in the case of Earth Observation (EO) data. With ever higher numbers of data sources and increasing data acquisition rates, dealing with EO data is indeed a challenge [1]. Geoscientists should address this challenge by using flexible and efficient tools and platforms. To answer this trend, the BigEarth project [2] aims to combine the advantages of high performance computing solutions with flexible processing description methodologies in order to reduce both task execution times and task definition time and effort. As a component of the BigEarth platform, WorDeL (Workflow Description Language) [3] is intended to offer a flexible, compact and modular approach to the task definition process. WorDeL, unlike other description alternatives such as Python or shell scripts, is oriented towards describing processing topologies, using them as abstractions for the processing programs. This feature is intended to make it an attractive alternative for users lacking in programming experience. By promoting modular designs, WorDeL not only makes the processing descriptions more user-readable and intuitive, but also helps organize the processing tasks into independent sub-tasks, which can be executed in parallel on multi-processor platforms in order to improve execution times. As a BigEarth platform [4] component, WorDeL represents the means by which the user interacts with the system, describing processing algorithms in terms of existing operators and workflows [5], which are ultimately translated into sets of executable commands. The WorDeL language has been designed to help in the definition of compute-intensive, batch tasks which can be distributed and executed on high-performance, cloud or grid-based architectures in order to improve the processing time. Main references for further information: [1] Gorgan, D., "Flexible and Adaptive Processing of Earth Observation Data over High Performance Computation Architectures", International Conference and Exhibition Satellite 2015, August 17-19, Houston, Texas, USA. [2] Bigearth project - flexible processing of big earth data over high performance computing architectures. http://cgis.utcluj.ro/bigearth, (2014) [3] Nandra, C., Gorgan, D., "Workflow Description Language for Defining Big Earth Data Processing Tasks", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 461-468, (2015). [4] Bacu, V., Stefan, T., Gorgan, D., "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015). [5] Mihon, D., Bacu, V., Colceriu, V., Gorgan, D., "Modeling of Earth Observation Use Cases through the KEOPS System", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 455-460, (2015).
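A Python analogue of the idea (illustrative only; this is not WorDeL syntax): a processing topology declared as named operators, with independent branches executed in parallel:

```python
# Python analogue of a WorDeL-style processing topology (illustrative only;
# this is not WorDeL syntax). Operators are named nodes; edges name inputs.
from concurrent.futures import ProcessPoolExecutor

def ndvi(scene):      return f"ndvi({scene})"
def ndwi(scene):      return f"ndwi({scene})"
def combine(a, b):    return f"combine({a}, {b})"

# Topology: two independent branches feeding one sink node.
TOPOLOGY = {
    "ndvi":    (ndvi,    ["scene"]),
    "ndwi":    (ndwi,    ["scene"]),
    "combine": (combine, ["ndvi", "ndwi"]),
}

def execute(topology, inputs):
    values = dict(inputs)
    with ProcessPoolExecutor() as pool:
        while len(values) < len(topology) + len(inputs):
            # Nodes whose inputs are all available run concurrently.
            ready = {n: (f, deps) for n, (f, deps) in topology.items()
                     if n not in values and all(d in values for d in deps)}
            futures = {n: pool.submit(f, *[values[d] for d in deps])
                       for n, (f, deps) in ready.items()}
            for n, fut in futures.items():
                values[n] = fut.result()
    return values

if __name__ == "__main__":
    print(execute(TOPOLOGY, {"scene": "landsat_tile_42"})["combine"])
```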
NASA Astrophysics Data System (ADS)
Memon, Shahbaz; Vallot, Dorothée; Zwinger, Thomas; Neukirchen, Helmut
2017-04-01
Scientific communities generate complex simulations through orchestration of semi-structured analysis pipelines which involves execution of large workflows on multiple, distributed and heterogeneous computing and data resources. Modeling ice dynamics of glaciers requires workflows consisting of many non-trivial, computationally expensive processing tasks which are coupled to each other. From this domain, we present an e-Science use case, a workflow, which requires the execution of a continuum ice flow model and a discrete element based calving model in an iterative manner. Apart from the execution, this workflow also contains data format conversion tasks that support the execution of ice flow and calving by means of transition through sequential, nested and iterative steps. Thus, the management and monitoring of all the processing tasks, including data management and transfer, becomes more complex. From the implementation perspective, this workflow model was initially developed as a set of scripts using static data input and output references. In the course of application usage, as more scripts or modifications were introduced to meet user requirements, debugging and validation of results became more cumbersome. To address these problems, we identified a need for a high-level scientific workflow tool through which all the above-mentioned processes can be achieved in an efficient and usable manner. We decided to make use of the e-Science middleware UNICORE (Uniform Interface to Computing Resources), which allows seamless and automated access to different heterogeneous and distributed resources and is supported by a scientific workflow engine. Based on this, we developed a high-level scientific workflow model for coupling of massively parallel High-Performance Computing (HPC) jobs: a continuum ice sheet model (Elmer/Ice) and a discrete element calving and crevassing model (HiDEM). In our talk we present how the use of a high-level scientific workflow middleware makes reproducibility of results more convenient and also provides a reusable and portable workflow template that can be deployed across different computing infrastructures. Acknowledgements This work was kindly supported by NordForsk as part of the Nordic Center of Excellence (NCoE) eSTICC (eScience Tools for Investigating Climate Change at High Northern Latitudes) and the Top-level Research Initiative NCoE SVALI (Stability and Variation of Arctic Land Ice).
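The iterative coupling pattern described above reduces to a simple loop; a minimal Python sketch (the real workflow submits Elmer/Ice and HiDEM as HPC jobs via UNICORE, and all function names here are stand-ins):

```python
# Sketch of the iterative coupling pattern described above (illustrative;
# the real workflow runs Elmer/Ice and HiDEM as HPC jobs under UNICORE).
def run_ice_flow(state):       # stand-in for submitting a continuum model job
    return {"geometry": state["geometry"] + "->flowed"}

def to_particles(state):       # stand-in for the format-conversion task
    return {"particles": state["geometry"] + "->particles"}

def run_calving(particles):    # stand-in for the discrete-element model job
    return {"geometry": particles["particles"] + "->calved"}

def couple(initial_geometry, n_cycles=3):
    state = {"geometry": initial_geometry}
    for cycle in range(n_cycles):
        state = run_ice_flow(state)          # continuum step
        particles = to_particles(state)      # conversion step
        state = run_calving(particles)       # discrete step feeds next cycle
        print(f"cycle {cycle}: {state['geometry']}")
    return state

couple("front_2017")
```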
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
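The quantization effect noted in the results can be reproduced with a toy feedback loop; a minimal Python sketch (not the C-MAPSS40k model; the plant, gains, and resolution below are arbitrary):

```python
# Toy illustration of feedback quantization in a distributed controller
# (not the C-MAPSS40k model): a smart sensor digitizes the measurement
# before it reaches a simple proportional-integral governor.
def quantize(x, lsb=0.05):
    """Round to the sensor's least significant bit."""
    return round(x / lsb) * lsb

def simulate(quantized, steps=200, dt=0.01, kp=2.0, ki=4.0, setpoint=1.0):
    y, integ = 0.0, 0.0
    for _ in range(steps):
        meas = quantize(y) if quantized else y      # smart-transducer path
        err = setpoint - meas
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-y + u)                          # first-order plant
    return y

print("ideal feedback:    ", simulate(False))
print("quantized feedback:", simulate(True))   # differs only by LSB effects
```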
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
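A minimal Python sketch of the Scheduler/Executor split (illustrative only; Cyrille2 itself tracks jobs in a database and runs them on a compute cluster):

```python
# Sketch of the Scheduler/Executor split (illustrative; not Cyrille2 code).
from queue import Queue

def scheduler(new_data, pipeline, job_queue):
    """Turn each newly arrived data item into one job per pipeline stage."""
    for item in new_data:
        for stage in pipeline:
            job_queue.put((stage, item))

def executor(job_queue):
    """Pull scheduled jobs and execute them (a cluster node in real life)."""
    while not job_queue.empty():
        stage, item = job_queue.get()
        print(f"running {stage} on {item}")

jobs = Queue()
scheduler(["reads_001.fq"], ["qc", "align", "annotate"], jobs)
executor(jobs)
```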
MAGMA: analysis of two-channel microarrays made easy.
Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph
2007-07-01
The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document all of MAGMA's data processing steps, thereby allowing the user to regenerate all results in their local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
Stability Analysis of Distributed Engine Control Systems Under Communication Packet Drop (Postprint)
2008-07-01
Currently, Full Authority Digital Engine Control (FADEC)...based on a centralized architecture framework is being widely used for gas turbine engine control. However, current FADEC is not able to meet the...system (DEC). FADEC based on Distributed Control Systems (DCS) offers modularity, improved control systems prognostics and fault tolerance along with
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture (Postprint)
2007-09-18
Subject terms: turbine engine control, engine health management, FADEC, Universal FADEC, Distributed Controls, UF, UF Platform, common FADEC, Generic FADEC...Modular FADEC, Adaptive Control. ...Eventually the Full Authority Digital Electronic Control (FADEC) became the norm. Presently, this control system architecture accounts for 15 to 20% of
Fast and Efficient Feature Engineering for Multi-Cohort Analysis of EHR Data.
Ozery-Flato, Michal; Yanover, Chen; Gottlieb, Assaf; Weissbrod, Omer; Parush Shear-Yashuv, Naama; Goldschmidt, Yaara
2017-01-01
We present a framework for feature engineering, tailored for longitudinal structured data, such as electronic health records (EHRs). To fast-track feature engineering and extraction, the framework combines general-use plug-in extractors, a multi-cohort management mechanism, and modular memoization. Using this framework, we rapidly extracted thousands of features from diverse and large healthcare data sources in multiple projects.
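A minimal Python sketch of plug-in extractors combined with memoization shared across cohorts (illustrative; the registry and function names here are assumptions, not the paper's API):

```python
# Sketch of plug-in feature extractors with memoization across cohorts
# (illustrative only; names and structure are assumptions, not the paper's API).
EXTRACTORS = {}

def extractor(name):
    """Decorator registering a plug-in feature extractor by name."""
    def register(fn):
        EXTRACTORS[name] = fn
        return fn
    return register

@extractor("n_visits")
def n_visits(record):
    return len(record["visits"])

_memo = {}  # (patient_id, extractor) -> value, shared across cohorts

def extract(cohort, records, features):
    rows = []
    for pid, record in records.items():
        if pid not in cohort:
            continue
        row = {}
        for feat in features:
            key = (pid, feat)
            if key not in _memo:                 # compute once, reuse everywhere
                _memo[key] = EXTRACTORS[feat](record)
            row[feat] = _memo[key]
        rows.append((pid, row))
    return rows

records = {"p1": {"visits": [1, 2, 3]}, "p2": {"visits": [1]}}
print(extract({"p1", "p2"}, records, ["n_visits"]))
print(extract({"p1"}, records, ["n_visits"]))  # served from the memo
```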
Choi, Sun Young; Lee, Hyun Jeong; Choi, Jaeyeon; Kim, Jiye; Sim, Sang Jun; Um, Youngsoon; Kim, Yunje; Lee, Taek Soon; Keasling, Jay D; Woo, Han Min
2016-01-01
Metabolic engineering of cyanobacteria has enabled photosynthetic conversion of CO2 to value-added chemicals as bio-solar cell factories. However, the production levels of isoprenoids in engineered cyanobacteria were quite low, compared to other microbial hosts. Therefore, modular optimization of multiple gene expressions for metabolic engineering of cyanobacteria is required for the production of farnesyl diphosphate-derived isoprenoids from CO2. Here, we engineered Synechococcus elongatus PCC 7942 with modular metabolic pathways consisting of the methylerythritol phosphate pathway enzymes and the amorphadiene synthase for production of amorpha-4,11-diene, resulting in significantly increased levels (23-fold) of amorpha-4,11-diene (19.8 mg/L) in the best strain relative to a parental strain. Replacing amorphadiene synthase with squalene synthase led to the synthesis of a high amount of squalene (4.98 mg/L/OD730). Overexpression of farnesyl diphosphate synthase is the most critical factor for the significant production, whereas overexpression of 1-deoxy-d-xylulose 5-phosphate reductase is detrimental to the cell growth and the production. Additionally, the cyanobacterial growth inhibition was alleviated by expressing a terpene synthase in S. elongatus PCC 7942 strain with the optimized MEP pathway only (SeHL33). This is the first demonstration of photosynthetic production of amorpha-4,11-diene from CO2 in cyanobacteria and production of squalene in S. elongatus PCC 7942. Our optimized modular OverMEP strain (SeHL33) with either co-expression of ADS or SQS demonstrated the highest production levels of amorpha-4,11-diene and squalene, which could expand the list of farnesyl diphosphate-derived isoprenoids from CO2 as bio-solar cell factories.
ESO Reflex: A Graphical Workflow Engine for Data Reduction
NASA Astrophysics Data System (ADS)
Hook, R.; Romaniello, M.; Péron, M.; Ballester, P.; Gabasch, A.; Izzo, C.; Ullgrén, M.; Maisala, S.; Oittinen, T.; Solin, O.; Savolainen, V.; Järveläinen, P.; Tyynelä, J.
2008-08-01
Sampo {http://www.eso.org/sampo} (Hook et al. 2005) is a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal is to assess the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Those prototypes will not only be used to validate concepts and understand requirements but will also be tools of immediate value for the community. Most of the raw data produced by ESO instruments can be reduced using CPL {http://www.eso.org/cpl} recipes: compiled C programs following an ESO standard and utilizing routines provided by the Common Pipeline Library. Currently reduction recipes are run in batch mode as part of the data flow system to generate the input to the ESO VLT/VLTI quality control process and are also made public for external users. Sampo has developed a prototype application called ESO Reflex {http://www.eso.org/sampo/reflex/} that integrates a graphical user interface and existing data reduction algorithms. ESO Reflex can invoke CPL-based recipes in a flexible way through a dedicated interface. ESO Reflex is based on the graphical workflow engine Taverna {http://taverna.sourceforge.net} that was originally developed by the UK eScience community, mostly for work in the life sciences. Workflows have been created so far for three VLT/VLTI instrument modes ( VIMOS/IFU {http://www.eso.org/instruments/vimos/}, FORS spectroscopy {http://www.eso.org/instruments/fors/} and AMBER {http://www.eso.org/instruments/amber/}), and the easy-to-use GUI allows the user to make changes to these or create workflows of their own. Python scripts and IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available.
Introduction to COFFE: The Next-Generation HPCMP CREATE-AV CFD Solver
NASA Technical Reports Server (NTRS)
Glasby, Ryan S.; Erwin, J. Taylor; Stefanski, Douglas L.; Allmaras, Steven R.; Galbraith, Marshall C.; Anderson, W. Kyle; Nichols, Robert H.
2016-01-01
HPCMP CREATE-AV Conservative Field Finite Element (COFFE) is a modular, extensible, robust numerical solver for the Navier-Stokes equations that builds in modularity and extensibility from first principles. COFFE employs a flexible, class-based hierarchy that provides a modular approach consisting of discretization, physics, parallelization, and linear algebra components. These components are developed with modern software engineering principles to ensure ease of uptake from a user's or developer's perspective. The Streamline Upwind/Petrov-Galerkin (SU/PG) method is utilized to discretize the compressible Reynolds-Averaged Navier-Stokes (RANS) equations, tightly coupled with a variety of turbulence models. The mathematics and the philosophy of the methodology that make up COFFE are presented.
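For context, the generic SU/PG weighted-residual statement for a conservation-law system reads as follows (a textbook form; COFFE's exact stabilization details may differ):

```latex
% Generic SU/PG weak form for a conservation-law system with residual R(u_h);
% textbook form only -- COFFE's exact stabilization terms may differ.
\[
\int_{\Omega} w_h^{\mathsf T}\, R(u_h)\, d\Omega
\;+\; \sum_{e}\int_{\Omega_e}
\Bigl( A_i^{\mathsf T}\,\frac{\partial w_h}{\partial x_i} \Bigr)^{\mathsf T}
\tau\, R(u_h)\, d\Omega \;=\; 0
\qquad \forall\, w_h ,
\]
% where $w_h$ is the test function, $A_i = \partial F_i/\partial u$ are the
% inviscid flux Jacobians, and $\tau$ is the stabilization matrix.
```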
Design of a modular digital computer system, CDRL no. D001, final design plan
NASA Technical Reports Server (NTRS)
Easton, R. A.
1975-01-01
The engineering breadboard implementation for the CDRL no. D001 modular digital computer system developed during design of the logic system was documented. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32 bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas were covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.
Modular assembly of thick multifunctional cardiac patches
Fleischer, Sharon; Shapira, Assaf; Feiner, Ron; Dvir, Tal
2017-01-01
In cardiac tissue engineering cells are seeded within porous biomaterial scaffolds to create functional cardiac patches. Here, we report on a bottom-up approach to assemble a modular tissue consisting of multiple layers with distinct structures and functions. Albumin electrospun fiber scaffolds were laser-patterned to create microgrooves for engineering aligned cardiac tissues exhibiting anisotropic electrical signal propagation. Microchannels were patterned within the scaffolds and seeded with endothelial cells to form closed lumens. Moreover, cage-like structures were patterned within the scaffolds and accommodated poly(lactic-co-glycolic acid) (PLGA) microparticulate systems that controlled the release of VEGF, which promotes vascularization, or dexamethasone, an anti-inflammatory agent. The structure, morphology, and function of each layer were characterized, and the tissue layers were grown separately in their optimal conditions. Before transplantation the tissue and microparticulate layers were integrated by an ECM-based biological glue to form thick 3D cardiac patches. Finally, the patches were transplanted in rats, and their vascularization was assessed. Because of the simple modularity of this approach, we believe that it could be used in the future to assemble other multicellular, thick, 3D, functional tissues. PMID:28167795
Specific and Modular Binding Code for Cytosine Recognition in Pumilio/FBF (PUF) RNA-binding Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Shuyun; Wang, Yang; Cassidy-Amstutz, Caleb
2011-10-28
Pumilio/fem-3 mRNA-binding factor (PUF) proteins possess a recognition code for bases A, U, and G, allowing designed RNA sequence specificity of their modular Pumilio (PUM) repeats. However, recognition side chains in a PUM repeat for cytosine are unknown. Here we report identification of a cytosine-recognition code by screening random amino acid combinations at conserved RNA recognition positions using a yeast three-hybrid system. This C-recognition code is specific and modular as specificity can be transferred to different positions in the RNA recognition sequence. A crystal structure of a modified PUF domain reveals specific contacts between an arginine side chain and the cytosine base. We applied the C-recognition code to design PUF domains that recognize targets with multiple cytosines and to generate engineered splicing factors that modulate alternative splicing. Finally, we identified a divergent yeast PUF protein, Nop9p, that may recognize natural target RNAs with cytosine. This work deepens our understanding of natural PUF protein target recognition and expands the ability to engineer PUF domains to recognize any RNA sequence.
Rapid Energy Modeling Workflow Demonstration Project
2014-01-01
...Conditioning Engineers; BIM: Building Information Model; BLCC: building life cycle costs; BPA: Building Performance Analysis; CAD: computer assisted...invited to enroll in the Autodesk Building Performance Analysis (BPA) Certificate Program under a group specifically for DoD installation
Metabolic modelling in the development of cell factories by synthetic biology
Jouhten, Paula
2012-01-01
Cell factories are commonly microbial organisms utilized for bioconversion of renewable resources to bulk or high value chemicals. Introduction of novel production pathways in chassis strains is the core of the development of cell factories by synthetic biology. Synthetic biology aims to create novel biological functions and systems not found in nature by combining biology with engineering. The workflow of the development of novel cell factories with synthetic biology is ideally linear, which will be attainable with the quantitative engineering approach, high-quality predictive models, and libraries of well-characterized parts. Different types of metabolic models, mathematical representations of metabolism and its components, enzymes and metabolites, are useful in particular phases of the synthetic biology workflow. In this minireview, the role of metabolic modelling in synthetic biology will be discussed with a review of current status of compatible methods and models for the in silico design and quantitative evaluation of a cell factory. PMID:24688669
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine runs the result. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and the influence of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
Project Antares: A low cost modular launch vehicle for the future
NASA Astrophysics Data System (ADS)
Aarnio, Steve; Anderson, Hobie; Arzaz, El Mehdi; Bailey, Michelle; Beeghly, Jeff; Cartwright, Curt; Chau, William; Dawdy, Andrew; Detert, Bruce; Ervin, Miles
1991-06-01
The single stage to orbit launch vehicle Antares is based upon the revolutionary concept of modularity, enabling the Antares to efficiently launch communications satellites, as well as heavy payloads, into Earth's orbit and beyond. The basic unit of the modular system, a single Antares vehicle, is aimed at launching approximately 10,000 kg into low Earth orbit (LEO). When coupled with a Centaur upper stage it is capable of placing 3500 kg into geostationary orbit. The Antares incorporates a reusable engine, the Dual Mixture Ratio Engine (DMRE), as its propulsive device. This enables Antares to compete and excel in the satellite launch market by dramatically reducing launch costs. Antares' projected launch costs are $1340 per kg to LEO which offers a tremendous savings over launch vehicles available today. Inherent in the design is the capability to attach several of these vehicles together to provide heavy lift capability. Any number of these vehicles, up to seven, can be attached depending on the payload and mission requirements. With a seven vehicle configuration Antares's modular concept provides a heavy lift capability of approximately 70,000 kg to LEO. This expandability allows for a wider range of payload options such as large Earth satellites, Space Station Freedom support, and interplanetary spacecraft, and also offers a significant cost savings over a mixed fleet based on different launch vehicles.
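The quoted rate implies per-launch costs of roughly the following (simple arithmetic on the abstract's figures, not quoted values):

```latex
\[
10\,000\ \mathrm{kg} \times \$1340/\mathrm{kg} \approx \$13.4\ \mathrm{M}
\quad\text{(single vehicle, LEO)},
\qquad
70\,000\ \mathrm{kg} \times \$1340/\mathrm{kg} \approx \$93.8\ \mathrm{M}
\quad\text{(seven-vehicle configuration, LEO)}.
\]
```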
2005-06-01
cognitive task analysis, organizational information dissemination and interaction, systems engineering, collaboration and communications processes, decision-making processes, and data collection and organization. By blending these diverse disciplines, command centers can be designed to support decision-making, cognitive analysis, information technology, and the human factors engineering aspects of Command and Control (C2). This model can then be used as a baseline when dealing with work in areas of business processes, workflow engineering, information management,
A Mixed-Methods Research Framework for Healthcare Process Improvement.
Bastian, Nathaniel D; Munoz, David; Ventura, Marta
2016-01-01
The healthcare system in the United States is spiraling out of control due to ever-increasing costs without significant improvements in quality, access to care, satisfaction, and efficiency. Efficient workflow is paramount to improving healthcare value while maintaining the utmost standards of patient care and provider satisfaction in high stress environments. This article provides healthcare managers and quality engineers with a practical healthcare process improvement framework to assess, measure and improve clinical workflow processes. The proposed mixed-methods research framework integrates qualitative and quantitative tools to foster the improvement of processes and workflow in a systematic way. The framework consists of three distinct phases: 1) stakeholder analysis, 2a) survey design, 2b) time-motion study, and 3) process improvement. The proposed framework is applied to the pediatric intensive care unit of the Penn State Hershey Children's Hospital. The implementation of this methodology led to identification and categorization of different workflow tasks and activities into both value-added and non-value added in an effort to provide more valuable and higher quality patient care. Based upon the lessons learned from the case study, the three-phase methodology provides a better, broader, leaner, and holistic assessment of clinical workflow. The proposed framework can be implemented in various healthcare settings to support continuous improvement efforts in which complexity is a daily element that impacts workflow. We proffer a general methodology for process improvement in a healthcare setting, providing decision makers and stakeholders with a useful framework to help their organizations improve efficiency. Published by Elsevier Inc.
Engineering of In Vitro 3D Capillary Beds by Self-Directed Angiogenic Sprouting
Chan, Juliana M.; Zervantonakis, Ioannis K.; Rimchala, Tharathorn; Polacheck, William J.; Whisler, Jordan; Kamm, Roger D.
2012-01-01
In recent years, microfluidic systems have been used to study fundamental aspects of angiogenesis through the patterning of single-layered, linear or geometric vascular channels. In vivo, however, capillaries exist in complex, three-dimensional (3D) networks, and angiogenic sprouting occurs with a degree of unpredictability in all x,y,z planes. The ability to generate capillary beds in vitro that can support thick, biological tissues remains a key challenge to the regeneration of vital organs. Here, we report the engineering of 3D capillary beds in an in vitro microfluidic platform that is comprised of a biocompatible collagen I gel supported by a mechanical framework of alginate beads. The engineered vessels have patent lumens, form robust ∼1.5 mm capillary networks across the devices, and support the perfusion of 1 µm fluorescent beads through them. In addition, the alginate beads offer a modular method to encapsulate and co-culture cells that either promote angiogenesis or require perfusion for cell viability in engineered tissue constructs. This laboratory-constructed vascular supply may be clinically significant for the engineering of capillary beds and higher order biological tissues in a scalable and modular manner. PMID:23226527
Dikina, Anna D; Strobel, Hannah A; Lai, Bradley P; Rolle, Marsha W; Alsberg, Eben
2015-06-01
There is a critical need to engineer a neotrachea because currently there are no long-term treatments for tracheal stenoses affecting large portions of the airway. In this work, a modular tracheal tissue replacement strategy was developed. High-cell density, scaffold-free human mesenchymal stem cell-derived cartilaginous rings and tubes were successfully generated through employment of custom designed culture wells and a ring-to-tube assembly system. Furthermore, incorporation of transforming growth factor-β1-delivering gelatin microspheres into the engineered tissues enhanced chondrogenesis with regard to tissue size and matrix production and distribution in the ring- and tube-shaped constructs, as well as luminal rigidity of the tubes. Importantly, all engineered tissues had similar or improved biomechanical properties compared to rat tracheas, which suggests they could be transplanted into a small animal model for airway defects. The modular, bottom up approach used to grow stem cell-based cartilaginous tubes in this report is a promising platform to engineer complex organs (e.g., trachea), with control over tissue size and geometry, and has the potential to be used to generate autologous tissue implants for human clinical applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spreter Von Kreudenstein, Thomas; Lario, Paula I; Dixit, Surjit B
2014-01-01
Computational and structure guided methods can make significant contributions to the development of solutions for difficult protein engineering problems, including the optimization of next generation of engineered antibodies. In this paper, we describe a contemporary industrial antibody engineering program, based on hypothesis-driven in silico protein optimization method. The foundational concepts and methods of computational protein engineering are discussed, and an example of a computational modeling and structure-guided protein engineering workflow is provided for the design of best-in-class heterodimeric Fc with high purity and favorable biophysical properties. We present the engineering rationale as well as structural and functional characterization data on these engineered designs. Copyright © 2013 Elsevier Inc. All rights reserved.
Sensor Webs with a Service-Oriented Architecture for On-demand Science Products
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Ungar, Stephen; Ames, Troy; Justice, Chris; Frye, Stuart; Chien, Steve; Tran, Daniel; Cappelaere, Patrice; Derezinsfi, Linda; Paules, Granville;
2007-01-01
This paper describes the work being managed by the NASA Goddard Space Flight Center (GSFC) Information System Division (ISD) under a NASA Earth Science Technology Office (ESTO) Advanced Information System Technology (AIST) grant to develop a modular sensor web architecture which enables discovery of sensors and workflows that can create customized science via a high-level service-oriented architecture based on Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) web service standards. These capabilities serve as a prototype of a user-centric architecture for the Global Earth Observing System of Systems (GEOSS). This work builds on and extends previous sensor web efforts conducted at NASA/GSFC using the Earth Observing 1 (EO-1) satellite and other low-earth orbiting satellites.
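As a minimal illustration of the OGC SWE service style, a Sensor Observation Service is queried with a standard key-value GetCapabilities request (Python sketch; the endpoint URL below is hypothetical):

```python
# Minimal OGC SWE interaction sketch. The endpoint URL is hypothetical;
# the key-value parameters follow the standard OGC GetCapabilities pattern.
import urllib.parse

SOS_ENDPOINT = "http://example.org/sos"   # hypothetical service address

params = urllib.parse.urlencode({
    "service": "SOS",
    "request": "GetCapabilities",
})
url = f"{SOS_ENDPOINT}?{params}"
print(url)
# A client would then fetch and parse the capabilities XML, e.g.:
#   import urllib.request
#   xml = urllib.request.urlopen(url).read()
```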
A Modular Approach to Integrating Biofuels Education into ChE Curriculum Part I--Learning Materials
ERIC Educational Resources Information Center
He, Q. Peter; Wang, Jin; Zhang, Rong; Johnson, Donald; Knight, Andrew; Polala, Ravali
2016-01-01
In view of potential demand for skilled engineers and competent researchers in the biofuels field, we have identified a significant gap between advanced biofuels research and undergraduate biofuels education in chemical engineering. To help bridge this gap, we created educational materials that systematically integrate biofuels technologies into…
Urciuolo, F; Garziano, A; Imparato, G; Panzetta, V; Fusco, S; Casale, C; Netti, P A
2016-01-29
The fabrication of functional tissue units is one of the major challenges in tissue engineering due to their in vitro use in tissue-on-chip systems, as well as in modular tissue engineering for the construction of macrotissue analogs. In this work, we aim to engineer dermal tissue micromodules obtained by culturing human dermal fibroblasts in porous gelatine microscaffolds. We proved that such stromal cells coupled with gelatine microscaffolds are able to synthesize and to assemble an endogenous extracellular matrix (ECM), resulting in tissue micromodules that evolve their biophysical features over time. In particular, we found a time-dependent variation of oxygen consumption kinetic parameters, of newly formed ECM stiffness and of micromodule self-aggregation properties. As a consequence, when used as building blocks to fabricate larger tissues, the initial state of the tissue micromodules strongly affects the ECM organization and maturation in the final macrotissue. Such results highlight the role of the micromodule properties in controlling the formation of three-dimensional macrotissues in vitro, defining an innovative design criterion for selecting tissue-building blocks for modular tissue engineering.
Design of Modular, Shape-transitioning Inlets for a Conical Hypersonic Vehicle
NASA Technical Reports Server (NTRS)
Gollan, Rowan J.; Smart, Michael K.
2010-01-01
For a hypersonic vehicle, propelled by scramjet engines, integration of the engines and airframe is highly desirable. Thus, the forward capture shape of the engine inlet should conform to the vehicle body shape. Furthermore, the use of modular engines places a constraint on the shape of the inlet sidewalls. Finally, one may desire a combustor cross-section shape that is different from that of the inlet. These shape constraints for the inlet can be accommodated by employing a streamline-tracing and lofting technique. This design technique was developed by Smart for inlets with a rectangular-to-elliptical shape transition. In this paper, we generalise that technique to produce inlets that conform to arbitrary shape requirements. As an example, we show the design of a body-integrated hypersonic inlet on a winged-cone vehicle, typical of what might be used in a three-stage orbital launch system. The special challenge of inlet design for this conical vehicle at an angle-of-attack is also discussed. That challenge is that the bow shock sits relatively close to the vehicle body.
NASA Astrophysics Data System (ADS)
Hwang, Darryl H.; Ma, Kevin; Yepes, Fernando; Nadamuni, Mridula; Nayyar, Megha; Liu, Brent; Duddalwar, Vinay; Lepore, Natasha
2015-12-01
A conventional radiology report primarily consists of a large amount of unstructured text, and lacks clear, concise, consistent and content-rich information. Hence, an area of unmet clinical need consists of developing better ways to communicate radiology findings and information specific to each patient. Here, we design a new workflow and reporting system that combines and integrates advances in engineering technology with those from the medical sciences, the Multidimensional Interactive Radiology Report and Analysis (MIRRA). Until recently, clinical standards have primarily relied on 2D images for the purpose of measurement, but with the advent of 3D processing, many of the manually measured metrics can be automated, leading to better reproducibility and less subjective measurement placement. Hence, we make use of this newly available 3D processing in our workflow. Our pipeline is used here to standardize the labeling, tracking, and quantifying of metrics for renal masses.
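A minimal Python sketch of the kind of measurement that 3D processing automates (illustrative only; not the MIRRA implementation):

```python
# Sketch of automating a 3D measurement from a segmentation mask
# (illustrative only; not the MIRRA implementation).
import numpy as np

def volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Lesion volume from a binary mask: voxel count x voxel volume."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True             # 10 x 10 x 10 voxel "mass"
print(volume_ml(mask))                       # 1.0 mL at 1 mm isotropic voxels
```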
OpenWorm: an open-science approach to modeling Caenorhabditis elegans.
Szigeti, Balázs; Gleeson, Padraig; Vella, Michael; Khayrulin, Sergey; Palyanov, Andrey; Hokanson, Jim; Currie, Michael; Cantarelli, Matteo; Idili, Giovanni; Larson, Stephen
2014-01-01
OpenWorm is an international collaboration with the aim of understanding how the behavior of Caenorhabditis elegans (C. elegans) emerges from its underlying physiological processes. The project has developed a modular simulation engine to create computational models of the worm. The modularity of the engine makes it possible to easily modify the model, incorporate new experimental data and test hypotheses. The modeling framework incorporates both biophysical neuronal simulations and a novel fluid-dynamics-based soft-tissue simulation for physical environment-body interactions. The project's open-science approach is aimed at overcoming the difficulties of integrative modeling within a traditional academic environment. In this article the rationale is presented for creating the OpenWorm collaboration, the tools and resources developed thus far are outlined and the unique challenges associated with the project are discussed.
2012-01-01
Visualization and analysis of molecular networks are both central to systems biology. However, there still exists a large technological gap between them, especially when assessing multiple network levels or hierarchies. Here we present RedeR, an R/Bioconductor package combined with a Java core engine for representing modular networks. The functionality of RedeR is demonstrated in two different scenarios: hierarchical and modular organization in gene co-expression networks and nested structures in time-course gene expression subnetworks. Our results demonstrate RedeR as a new framework to deal with the multiple network levels that are inherent to complex biological systems. RedeR is available from http://bioconductor.org/packages/release/bioc/html/RedeR.html. PMID:22531049
Lv, Xiaomei; Gu, Jiali; Wang, Fan; Xie, Wenping; Liu, Min; Ye, Lidan; Yu, Hongwei
2016-12-01
Metabolic engineering of microorganisms for heterologous biosynthesis is a promising route to sustainable chemical production which attracts increasing research and industrial interest. However, the efficiency of microbial biosynthesis is often restricted by insufficient activity of pathway enzymes and unbalanced utilization of metabolic intermediates. This work presents a combinatorial strategy integrating modification of multiple rate-limiting enzymes and modular pathway engineering to simultaneously improve intra- and inter-pathway balance, which might be applicable for a range of products, using isoprene as an example product. For intra-module engineering within the methylerythritol-phosphate (MEP) pathway, directed co-evolution of DXS/DXR/IDI was performed adopting a lycopene-indicated high-throughput screening method developed herein, leading to 60% improvement of isoprene production. In addition, inter-module engineering between the upstream MEP pathway and the downstream isoprene-forming pathway was conducted via promoter manipulation, which further increased isoprene production by 2.94-fold compared to the recombinant strain with solely protein engineering and 4.7-fold compared to the control strain containing wild-type enzymes. These results demonstrated the potential of pathway optimization in isoprene overproduction as well as the effectiveness of combining metabolic regulation and protein engineering in improvement of microbial biosynthesis. Biotechnol. Bioeng. 2016;113: 2661-2669. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Hardesty Lewis, D.
2017-12-01
MODFLOW (MF) has served for decades as a de facto standard for groundwater modelling. Despite successive versions, legacy MF-96 simulations are still commonly encountered; such is the case for many of the groundwater availability models of the State of Texas. Unfortunately, even the existence of converters to MF's newer versions has not necessarily stimulated their adoption, let alone re-creation of legacy models. This state of affairs may be due to the modeller's unfamiliarity with the terminal or the FORTRAN programming language, resulting in an inability to address the minor or major bugs, nuances, or limitations in compilation or execution of the conversion programs. Here, we present a workflow that addresses the above intricacies while attempting to maintain portability in implementation. This workflow is constructed in the form of a Bash script and - with the geoscience-oriented user in mind - re-presented as a Jupyter notebook. First, one may choose whether this executable will run with POSIX compliance or with a preference towards the Bash facilities, both widely adopted by operating systems. In the same vein, it attempts to function within minimal command environments, which reduces dependencies. Finally, it is designed to offer parallelism across as many cores and nodes as necessary, or as few as desired, whether on a personal computer or a supercomputer. Underlying this workflow are patches such that antiquated tools may compile and execute on modern hardware. Fixes to long-standing bugs and limitations in the existing MF converters have also been prepared. Specifically, support for the conversion of MF-96- and Horizontal Flow Barrier-coupled simulations has been added. More radically, we have laid the foundations of a conversion utility between MF and a similar modeller, ParFlow. Furthermore, the modular approach followed may extend to an application which inter-operates between arbitrary groundwater simulators. In short, an accessible and portable workflow for up-conversion between MODFLOW versions now avails itself to geoscientists. The updated programs within it may allow for re-use, in whole or in part, of legacy simulations. Lastly, a generic inter-operator has been established, opening the possibility of significant ease in the recycling of groundwater data in the future.
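The batch nature of this up-conversion lends itself to a simple parallel driver. A minimal sketch, assuming a hypothetical converter executable named mf96to2005 and one legacy model per directory; the real workflow is a Bash script/Jupyter notebook, so this Python rendering is illustrative only:

```python
# Illustrative sketch of batch up-conversion across cores.
# "mf96to2005" is a hypothetical converter command standing in for the
# patched MODFLOW conversion utilities; adjust to the real tool's name.
import subprocess
from multiprocessing import Pool
from pathlib import Path

def convert(model_dir: Path) -> tuple[str, int]:
    """Run the (hypothetical) converter on one legacy MF-96 model."""
    result = subprocess.run(
        ["mf96to2005", str(model_dir / "model.nam")],
        capture_output=True, text=True,
    )
    return model_dir.name, result.returncode

if __name__ == "__main__":
    models = sorted(p for p in Path("legacy_models").iterdir() if p.is_dir())
    # Parallelism across as many (or as few) cores as desired.
    with Pool(processes=4) as pool:
        for name, code in pool.imap_unordered(convert, models):
            print(f"{name}: {'ok' if code == 0 else 'failed'}")
```

Scaling beyond one machine, as the workflow does across nodes, would swap the process pool for a job scheduler.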
Synthetic biology: programming cells for biomedical applications.
Hörner, Maximilian; Reischmann, Nadine; Weber, Wilfried
2012-01-01
The emerging field of synthetic biology is a novel biological discipline at the interface between traditional biology, chemistry, and engineering sciences. Synthetic biology aims at the rational design of complex synthetic biological devices and systems with desired properties by combining compatible, modular biological parts in a systematic manner. While the first engineered systems were mainly proof-of-principle studies to demonstrate the power of the modular engineering approach of synthetic biology, subsequent systems focus on applications in the health, environmental, and energy sectors. This review describes recent approaches for biomedical applications that were developed along the synthetic biology design hierarchy, at the level of individual parts, of devices, and of complex multicellular systems. It describes how synthetic biological parts can be used for the synthesis of drug-delivery tools, how synthetic biological devices can facilitate the discovery of novel drugs, and how multicellular synthetic ecosystems can give insight into population dynamics of parasites and hosts. These examples demonstrate how this new discipline could contribute to novel solutions in the biopharmaceutical industry.
NASA Astrophysics Data System (ADS)
Ammann, C. M.; Vigh, J. L.; Lee, J. A.
2016-12-01
Society's growing needs for robust and relevant climate information have fostered an explosion in tools and frameworks for processing climate projections. Many top-down workflows might be employed to generate sets of pre-computed data and plots, frequently served in a "loading-dock style" through a metadata-enabled search and discovery engine. Despite these increasing resources, the diverse needs of applications-driven projects often result in data processing workflow requirements that cannot be fully satisfied using past approaches. In parallel to the data processing challenges, the provision of climate information to users in a form that is also usable represents a formidable challenge of its own. Finally, many users have neither the time nor the desire to synthesize and distill massive volumes of climate information to find the information relevant to their particular application. All of these considerations call for new approaches to developing actionable climate information. CRMe seeks to bridge the gap between the diversity and richness of practitioners' bottom-up needs and the discrete, structured top-down workflows typically implemented for rapid delivery. Additionally, CRMe has implemented web-based data services capable of providing focused climate information in usable form for a given location, or as spatially aggregated information for entire regions or countries, following the needs of users and sectors. Making climate data actionable also involves summarizing and presenting it in concise and approachable ways. CRMe is developing the concept of dashboards, co-developed with users, to condense the key information into a quick summary of the most relevant, curated climate data for a given discipline, application, or location, while still enabling users to efficiently conduct deeper discovery into rich datasets on an as-needed basis.
Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested to control technological parameters, usually the only true constant parameter is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should increasingly be laid on physiological parameters. This contribution aims at illustrating a workflow of data life cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis in search of a physiological explanation for observed phenomena in productivity. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent, in terms of productivity and physiology, from the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during the induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easily accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were all reached within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). Concluding, this contribution provides a workflow for gaining a physiological view on the process and illustrates potential benefits. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:261-270, 2017.
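The predictive descriptor in point (3) is just the running integral of the specific substrate uptake rate. A minimal numerical sketch with synthetic data (the values and titer curve are invented for illustration, not taken from the study):

```python
# Sketch: locate the maximum specific titer and the cumulatively consumed
# substrate dSn at that point. All numbers below are synthetic placeholders.
import numpy as np

t = np.linspace(0.0, 30.0, 61)            # time after induction [h]
q_s = 0.15 * np.ones_like(t)              # specific substrate uptake [g/g/h]
spec_titer = 3.0 * t * np.exp(-t / 15.0)  # specific product titer [g/g]

# Physiological descriptor: dSn = integral of q_s dt (trapezoidal rule).
dSn = np.concatenate(
    ([0.0], np.cumsum(np.diff(t) * 0.5 * (q_s[1:] + q_s[:-1])))
)

i_max = int(np.argmax(spec_titer))
print(f"max specific titer at t = {t[i_max]:.1f} h, dSn = {dSn[i_max]:.2f} g/g")
```

With real fermentation data in place of the synthetic arrays, the same few lines reproduce the paper's observation that the titer maximum aligns with a narrow dSn window rather than a fixed time point.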
9th Annual Systems Engineering Conference: Volume 4 Thursday
2006-10-26
Connectivity, speed, volume • Enterprise application integration • Workflow integration or multi-media • Federated search capability • Link analysis and ... categorization, federated search & automated discovery of information • Collaborative tools to quickly share relevant information • Built on commercial ...
Avco Lycoming QCGAT program design cycle, demonstrated performance and emissions
NASA Technical Reports Server (NTRS)
Fogel, P.; Koschier, A.
1980-01-01
A high-bypass-ratio, twin-spool turbofan engine of modular design, which incorporates a front fan module driven by a modified LTS101 core engine, was tested. The engine is housed in a nacelle incorporating full-length fan ducting with sound treatment in both the inlet and fan discharge flow paths. Design goals of components and results of component tests are presented together with full engine test results. The rationale behind the combustor design selected for the engine is presented, as well as the emissions test results. Total system (engine and nacelle) test results are included.
Army Engineers in a Joint and Multinational Environment
2008-05-22
... operations in a maneuver battalion in national ... The battalion also lacked ... construction management section (CMS) to fulfill these requirements and provide operational mission guidance for the multinational units. The CMS, led by a lieutenant ... also deleting from the inventory the engineer group headquarters, relying ... (Andrew Feickert, “U.S. Army’s Modular Redesign: Issues for Congress”)
1993-08-20
Distribution unlimited. Systems Engineering Division, Aeronautical Systems Center, Air Force Materiel Command, Wright-Patterson AFB OH 45433-7126. Report no. ASC-TR-94-50; Special Programs Division; Training ...
ASaiM: a Galaxy-based framework to analyze microbiota data.
Batut, Bérénice; Gravouil, Kévin; Defois, Clémence; Hiltemann, Saskia; Brugère, Jean-François; Peyretaillade, Eric; Peyret, Pierre
2018-05-22
New generations of sequencing platforms coupled to numerous bioinformatics tools have led to rapid technological progress in metagenomics and metatranscriptomics to investigate complex microorganism communities. Nevertheless, a combination of different bioinformatic tools remains necessary to draw conclusions from microbiota studies. Modular and user-friendly tools would greatly improve such studies. We therefore developed ASaiM, an open-source Galaxy-based framework dedicated to microbiota data analyses. ASaiM provides an extensive collection of tools to assemble, extract, explore and visualize microbiota information from raw metataxonomic, metagenomic or metatranscriptomic sequences. To guide the analyses, several customizable workflows are included and are supported by tutorials and Galaxy interactive tours, which guide users through the analyses step by step. ASaiM is implemented as a Galaxy Docker flavour. It is scalable to thousands of datasets, but can also be used on a normal PC. The associated source code is available under the Apache 2 license at https://github.com/ASaiM/framework and documentation can be found online (http://asaim.readthedocs.io). Based on the Galaxy framework, ASaiM offers a sophisticated environment with a variety of tools, workflows, documentation and training to scientists working on complex microorganism communities. It makes analysis and exploration of microbiota data easy, quick, transparent, reproducible and shareable.
PGen: large-scale genomic variations analysis workflow and browser in SoyKB.
Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti
2016-10-06
With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and the Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version on GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB) ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 SNPs and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage; 297,245 non-synonymous SNPs and 3,330 copy number variation (CNV) regions were also identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 soybean germplasm lines, have been integrated as well. These SNPs are being utilized for trait improvement using genotype-to-phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. The PGen workflow has been optimized for the most efficient analysis of soybean data through thorough testing and validation. This research serves as an example of best practices for development of genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. The PGen workflow can also be easily customized for analysis of data from other species.
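The variant-calling steps named above (SNP/indel calling, SNP annotation, CNV analysis) form a dependency graph that a workflow manager such as Pegasus-WMS schedules onto HPC resources. A generic sketch of that DAG structure in plain Python; the task names are illustrative, this is not the Pegasus API, and the real workflow lives in the linked GitHub repository:

```python
# Generic sketch of a variant-calling DAG: each task lists its parents.
edges = {
    "align_reads": [],
    "sort_bam": ["align_reads"],
    "call_snps_indels": ["sort_bam"],
    "annotate_snps": ["call_snps_indels"],
    "cnv_analysis": ["sort_bam"],
}

def topological_order(deps):
    """Order tasks so every task runs after its dependencies."""
    done, order = set(), []
    def visit(task):
        if task in done:
            return
        for parent in deps[task]:
            visit(parent)
        done.add(task)
        order.append(task)
    for task in deps:
        visit(task)
    return order

print(topological_order(edges))
# ['align_reads', 'sort_bam', 'call_snps_indels', 'annotate_snps', 'cnv_analysis']
```

A workflow engine does essentially this ordering, then dispatches independent branches (here, annotation and CNV analysis) in parallel.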
Distributed Data Integration Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Critchlow, T; Ludaescher, B; Vouk, M
The Internet is becoming the preferred method for disseminating scientific data from a variety of disciplines. This can result in information overload on the part of the scientists, who are unable to query all of the relevant sources, even if they knew where to find them, what they contained, how to interact with them, and how to interpret the results. A related issue is that keeping up with current trends in information technology often taxes the end-user's expertise and time. Thus, instead of benefiting from this information-rich environment, scientists become experts on a small number of sources and technologies, use them almost exclusively, and develop a resistance to innovations that can enhance their productivity. Enabling information-based scientific advances, in domains such as functional genomics, requires fully utilizing all available information and the latest technologies. In order to address this problem we are developing an end-user-centric, domain-sensitive, workflow-based infrastructure, shown in Figure 1, that will allow scientists to design complex scientific workflows that reflect the data manipulation required to perform their research without an undue burden. We are taking a three-tiered approach to designing this infrastructure, utilizing (1) abstract workflow definition, construction, and automatic deployment, (2) complex agent-based workflow execution, and (3) automatic wrapper generation. In order to construct a workflow, the scientist defines an abstract workflow (AWF) in terminology (semantics and context) that is familiar to him/her. This AWF includes all of the data transformations, selections, and analyses required by the scientist, but does not necessarily specify particular data sources. This abstract workflow is then compiled into an executable workflow (EWF, in our case XPDL) that is then evaluated and executed by the workflow engine. This EWF contains references to specific data sources and interfaces capable of performing the desired actions. In order to provide access to the largest number of resources possible, our lowest level utilizes automatic wrapper generation techniques to create information and data wrappers capable of interacting with the complex interfaces typical in scientific analysis. The remainder of this document outlines our work in these three areas, the impact our work has made, and our plans for the future.
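The AWF-to-EWF compilation step can be pictured as a binding pass: each abstract step, phrased in the scientist's own semantics, is mapped to a concrete source or wrapper. A minimal sketch under invented names (the project itself emits XPDL, not Python dictionaries):

```python
# Sketch of compiling an abstract workflow (AWF) into an executable
# workflow (EWF) by binding semantics to concrete wrappers. All step,
# semantics, and wrapper names here are hypothetical illustrations.

AWF = [
    {"step": "select_sequences", "semantics": "homologs of gene X"},
    {"step": "align", "semantics": "multiple sequence alignment"},
    {"step": "build_tree", "semantics": "phylogenetic analysis"},
]

# Registry mapping abstract semantics to concrete wrappers (lowest tier).
BINDINGS = {
    "homologs of gene X": "blast_wrapper",
    "multiple sequence alignment": "clustal_wrapper",
    "phylogenetic analysis": "phylip_wrapper",
}

def compile_to_ewf(awf):
    """Bind each abstract step to a concrete source/interface."""
    return [
        {"step": s["step"], "wrapper": BINDINGS[s["semantics"]]}
        for s in awf
    ]

for task in compile_to_ewf(AWF):
    print(task)
```

The key design point is that the AWF never names sources, so the same scientist-facing workflow can be re-bound as sources appear, change, or disappear.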
Lim, Natalie Y. N.; Roco, Constance A.; Frostegård, Åsa
2016-01-01
Adequate comparisons of DNA and cDNA libraries from complex environments require methods for co-extraction of DNA and RNA due to the inherent heterogeneity of such samples, or risk bias caused by variations in lysis and extraction efficiencies. Still, there are few methods and kits allowing simultaneous extraction of DNA and RNA from the same sample, and the existing ones generally require optimization. The proprietary nature of kit components, however, makes modifications of individual steps in the manufacturer’s recommended procedure difficult. Surprisingly, enzymatic treatments are often performed before purification procedures are complete, which we have identified here as a major problem when seeking efficient genomic DNA removal from RNA extracts. Here, we tested several DNA/RNA co-extraction commercial kits on inhibitor-rich soils, and compared them to a commonly used phenol-chloroform co-extraction method. Since none of the kits/methods co-extracted high-quality nucleic acid material, we optimized the extraction workflow by introducing small but important improvements. In particular, we illustrate the need for extensive purification prior to all enzymatic procedures, with special focus on the DNase digestion step in RNA extraction. These adjustments led to the removal of enzymatic inhibition in RNA extracts and made it possible to reduce genomic DNA to below detectable levels as determined by quantitative PCR. Notably, we confirmed that DNase digestion may not be uniform in replicate extraction reactions, thus the analysis of “representative samples” is insufficient. The modular nature of our workflow protocol allows optimization of individual steps. It also increases focus on additional purification procedures prior to enzymatic processes, in particular DNases, yielding genomic DNA-free RNA extracts suitable for metatranscriptomic analysis. PMID:27803690
Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Gianluca; Madabhushi, Anant
2014-01-01
Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch such quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast graphics processing unit (GPU) renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27-inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820
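Fiducial-based stitching of this kind reduces, at its geometric core, to estimating a rigid transform from matched point pairs. A minimal sketch of the standard least-squares (Kabsch) solution with made-up fiducial coordinates; this illustrates the idea, not Histostitcher's actual implementation:

```python
# Estimate the rigid transform (rotation R, translation t) mapping
# fiducials on one quadrant boundary onto the matching fiducials of
# its neighbor, in the least-squares sense.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Return R, t such that dst ~ src @ R.T + t (rows are 2D points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - src_c @ R.T
    return R, t

# User-identified fiducial pairs on the shared boundary (invented values).
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]])
dst = np.array([[1.0, 2.0], [11.0, 2.0], [11.0, 7.0]])  # pure translation
R, t = rigid_transform(src, dst)
print(np.round(R, 3), np.round(t, 3))  # identity rotation, t = [1, 2]
```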
Lim, Hyun Gyu; Lim, Jae Hyung; Jung, Gyoo Yeol
2015-01-01
Refactoring microorganisms for efficient production of advanced biofuels such as n-butanol from a mixture of sugars in cheap feedstock is a prerequisite to achieving economic feasibility in biorefinery. However, production of biofuel from inedible and cheap feedstock is highly challenging due to the slower utilization of biomass-derived sugars, arising from complex assimilation pathways, difficulties in amplification of biosynthetic pathways for heterologous metabolites, and redox imbalance caused by consuming intracellular reducing power to produce highly reduced biofuels. Despite these problems, microorganisms must still show robust production of biofuel to be industrially feasible. Thus, refactoring microorganisms for efficient conversion is highly desirable in biofuel production. In this study, we engineered a robust Escherichia coli to accomplish high production of n-butanol from galactose-glucose mixtures via the design of modular pathways, an efficient and systematic way to reconstruct an entire metabolic pathway with many target genes. Three modular pathways designed using predictable genetic elements were assembled for efficient galactose utilization, n-butanol production, and redox re-balancing to robustly produce n-butanol from a sugar mixture of galactose and glucose. Specifically, the engineered strain showed dramatically increased n-butanol production (a 3.3-fold increase, to 6.2 g/L after 48-h fermentation) compared to the parental strain (1.9 g/L) in galactose-supplemented medium. Moreover, fermentation with mixtures of galactose and glucose at various ratios from 2:1 to 1:2 confirmed that our engineered strain was able to robustly produce n-butanol regardless of sugar composition, with simultaneous utilization of galactose and glucose. Collectively, modular pathway engineering of the metabolic network can be an effective approach in strain development for optimal biofuel production with cost-effective fermentable sugars. To the best of our knowledge, this study demonstrates the first and highest n-butanol production from galactose in E. coli. Moreover, robust production of n-butanol from sugar mixtures of variable composition should facilitate the economic feasibility of microbial processes using sugars from cheap biomass in the near future.
Systematic engineering of pentose phosphate pathway improves Escherichia coli succinate production.
Tan, Zaigao; Chen, Jing; Zhang, Xueli
2016-01-01
Succinate biosynthesis in Escherichia coli is reducing-equivalent-dependent, and the EMP pathway serves as the primary source of reducing equivalents under anaerobic conditions. Compared with EMP, the pentose phosphate pathway (PPP) conserves reducing equivalents but suffers from low efficacy. In this study, a ribosome binding site library and modified multivariate modular metabolic engineering (MMME) approaches are employed to overcome the low efficacy of the PPP and thus increase succinate production. Altering the expression levels of different PPP enzymes has distinct effects on succinate production. Specifically, increased expression of five enzymes, i.e., Zwf, Pgl, Gnd, Tkt, and Tal, contributes to increased succinate production, while increased expression of two enzymes, i.e., Rpe and Rpi, significantly decreases it. A modular engineering strategy is employed to decompose the PPP into three modules according to position and function. Engineering of the Zwf/Pgl/Gnd and Tkt/Tal modules effectively increases succinate yield and production, while engineering of the Rpe/Rpi module decreases them. The imbalance of enzymatic reactions in the PPP is alleviated using the MMME approach. Finally, combinational utilization of the engineered PPP and SthA transhydrogenase raises the succinate yield to 1.61 mol/mol glucose, which is 94% of the theoretical maximum yield (1.71 mol/mol) and, to our knowledge, the highest succinate yield in minimal medium. In summary, we systematically engineered the PPP to improve the supply of reducing equivalents and thus succinate production. Besides succinate, these PPP engineering strategies and conclusions should also be applicable to the production of other reducing-equivalent-dependent biorenewables.
Preliminary development of an advanced modular pressure relief cushion: Testing and user evaluation.
Freeto, Tyler; Mitchell, Steven J; Bogie, Kath M
2018-02-01
Effective pressure relief cushions are identified as a core assistive technology need by the World Health Organization Global Cooperation on Assistive Technology. High-quality, affordable wheelchair cushions could provide effective pressure relief for many individuals with limited access to advanced assistive technology. Value-driven engineering (VdE) principles were employed to develop a prototype modular cushion. Low-cost, dynamically responsive gel balls were arranged in a close-packed array and seated in bilayer foam for containment and support. Two modular cushions, one with high-compliance balls and one with moderate-compliance balls, were compared with High Profile and Low Profile Roho® and Jay® Medical 2 cushions. ISO 16840-2 biomechanical standardized tests were applied to assess cushion performance. A preliminary materials cost analysis was carried out. A prototype modular cushion was evaluated by 12 participants who reported satisfaction using a questionnaire based on the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0) instrument. Overall, the modular cushions performed better than, or on par with, the most widely prescribed commercially available cushions under ISO 16840-2 testing. Users rated the modular cushion highly for overall appearance, size and dimensions, comfort, safety, stability, ease of adjustment and general ease of use. Cost analysis indicated that every modular cushion component could be replaced several times and still maintain cost-efficacy over the complete cushion lifecycle. A VdE modular cushion has the potential to provide effective pressure relief for many users at a low lifetime cost. Copyright © 2017. Published by Elsevier Ltd.
A modular modulation method for achieving increases in metabolite production.
Acerenza, Luis; Monzon, Pablo; Ortega, Fernando
2015-01-01
Increasing the production of overproducing strains represents a great challenge. Here, we develop a modular modulation method to determine the key steps for genetic manipulation to increase metabolite production. The method consists of three steps: (i) modularization of the metabolic network into two modules connected by linking metabolites, (ii) change in the activity of the modules using auxiliary rates producing or consuming the linking metabolites in appropriate proportions and (iii) determination of the key modules and steps to increase production. The mathematical formulation of the method in matrix form shows that it may be applied to metabolic networks of any structure and size, with reactions showing any kind of rate laws. The results are valid for any type of conservation relationships in the metabolite concentrations or interactions between modules. The activity of the module may, in principle, be changed by any large factor. The method may be applied recursively or combined with other methods devised to perform fine searches in smaller regions. In practice, it is implemented by integrating to the producer strain heterologous reactions or synthetic pathways producing or consuming the linking metabolites. The new procedure may contribute to develop metabolic engineering into a more systematic practice. © 2015 American Institute of Chemical Engineers.
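The matrix form referred to above can be pictured in standard stoichiometric notation. A minimal sketch with generic symbols; this illustrates the modularization idea, not necessarily the paper's exact formulation:

```latex
% Steady state of a network split into modules 1 and 2 that are coupled
% only through the linking metabolites L (generic notation):
\[
N v =
\begin{pmatrix}
N_{1}  & 0      \\
N_{L1} & N_{L2} \\
0      & N_{2}
\end{pmatrix}
\begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix}
= 0 .
\]
% Module activities are probed by auxiliary rates u that produce or consume
% the linking metabolites in fixed proportions, perturbing only the L-block:
\[
N_{L1}\, v_{1} + N_{L2}\, v_{2} + u = 0 .
\]
```

Comparing production before and after applying u then reveals which module, and which steps within it, control the flux to the product.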
PATHA: Performance Analysis Tool for HPC Applications
Yoo, Wucherl; Koo, Michelle; Cao, Yi; ...
2016-02-18
Large science projects rely on complex workflows to analyze terabytes or petabytes of data. These jobs are often running over thousands of CPU cores and simultaneously performing data accesses, data movements, and computation. It is difficult to identify bottlenecks or to debug the performance issues in these large workflows. In order to address these challenges, we have developed the Performance Analysis Tool for HPC Applications (PATHA) using state-of-the-art open-source big data processing tools. Our framework can ingest system logs to extract key performance measures, and apply the most sophisticated statistical tools and data mining methods to the performance data. Furthermore, it utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of PATHA, we conduct a case study on the workflows from an astronomy project known as the Palomar Transient Factory (PTF). This study processed 1.6 TB of system logs collected on the NERSC supercomputer Edison. Using PATHA, we were able to identify performance bottlenecks, which reside in three tasks of the PTF workflow with a dependency on the density of celestial objects.
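The ingest-then-aggregate step described above can be pictured with a toy example. A minimal sketch, noting that the log format and task names are invented and PATHA itself sits on big-data tooling rather than the standard library:

```python
# Parse timing records out of raw logs and rank tasks by mean elapsed
# time to surface candidate bottlenecks. Log format is hypothetical.
import re
from collections import defaultdict
from statistics import mean

LOG = """\
2016-02-18 01:02:03 task=photometric_calibration elapsed=412.5
2016-02-18 01:09:11 task=image_subtraction elapsed=1290.0
2016-02-18 01:31:00 task=image_subtraction elapsed=1405.2
"""

pattern = re.compile(r"task=(\S+) elapsed=([\d.]+)")
elapsed = defaultdict(list)
for line in LOG.splitlines():
    m = pattern.search(line)
    if m:
        elapsed[m.group(1)].append(float(m.group(2)))

for task, times in sorted(elapsed.items(), key=lambda kv: -mean(kv[1])):
    print(f"{task}: n={len(times)} mean={mean(times):.1f}s")
```

At PATHA's scale the same extract-and-aggregate logic runs distributed over terabytes of logs instead of an in-memory string.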
Formalizing an integrative, multidisciplinary cancer therapy discovery workflow
McGuire, Mary F.; Enderling, Heiko; Wallace, Dorothy I.; Batra, Jaspreet; Jordan, Marie; Kumar, Sushil; Panetta, John C.; Pasquier, Eddy
2014-01-01
Although many clinicians and researchers work to understand cancer, there has been limited success to effectively combine forces and collaborate over time, distance, data and budget constraints. Here we present a workflow template for multidisciplinary cancer therapy that was developed during the 2nd Annual Workshop on Cancer Systems Biology sponsored by Tufts University, Boston, MA in July 2012. The template was applied to the development of a metronomic therapy backbone for neuroblastoma. Three primary groups were identified: clinicians, biologists, and scientists (mathematicians, computer scientists, physicists and engineers). The workflow described their integrative interactions; parallel or sequential processes; data sources and computational tools at different stages as well as the iterative nature of therapeutic development from clinical observations to in vitro, in vivo, and clinical trials. We found that theoreticians in dialog with experimentalists could develop calibrated and parameterized predictive models that inform and formalize sets of testable hypotheses, thus speeding up discovery and validation while reducing laboratory resources and costs. The developed template outlines an interdisciplinary collaboration workflow designed to systematically investigate the mechanistic underpinnings of a new therapy and validate that therapy to advance development and clinical acceptance. PMID:23955390
NASA Astrophysics Data System (ADS)
Oztekin, Halit; Temurtas, Feyzullah; Gulbag, Ali
The Arithmetic and Logic Unit (ALU) design is one of the important topics in the Computer Architecture and Organization course in Computer and Electrical Engineering departments. Existing ALU designs used as educational tools are typically non-modular in nature. As programmable logic technology has developed rapidly, it has become feasible to implement Field Programmable Gate Array (FPGA)-based ALU designs in this course. In this paper, we have adopted a modular approach to FPGA-based ALU design. All the modules in the ALU design are realized using schematic structures on Altera's Cyclone II development board. Under this model, the ALU content is divided into four distinct modules: an arithmetic unit (excluding multiplication and division operations), a logic unit, a multiplication unit and a division unit. Users can easily design an ALU of any size, since this approach is modular in nature. This approach was then applied to the microcomputer architecture design named BZK.SAU.FPGA10.0, replacing its existing ALU unit.
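The four-module split can be pictured with a software analogy. A minimal sketch in Python of the same decomposition; the actual design is a schematic FPGA implementation, and this merely illustrates dispatching opcodes to independent modules:

```python
# Software analogy of the four-module ALU decomposition (illustrative only).
def arithmetic(a, b, op):          # arithmetic module (no mul/div)
    return a + b if op == "add" else a - b

def logic(a, b, op):               # logic module
    return {"and": a & b, "or": a | b, "xor": a ^ b}[op]

def multiply(a, b):                # dedicated multiplication module
    return a * b

def divide(a, b):                  # dedicated division module
    return a // b

MODULES = {
    "add": lambda a, b: arithmetic(a, b, "add"),
    "sub": lambda a, b: arithmetic(a, b, "sub"),
    "and": lambda a, b: logic(a, b, "and"),
    "or":  lambda a, b: logic(a, b, "or"),
    "xor": lambda a, b: logic(a, b, "xor"),
    "mul": multiply,
    "div": divide,
}

def alu(opcode, a, b):
    """Dispatch to the module implementing the opcode."""
    return MODULES[opcode](a, b)

print(alu("mul", 6, 7))  # 42
```

The pedagogical point carries over: because each module is self-contained, one can be swapped or resized without touching the others.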
Photoreactive elastin-like proteins for use as versatile bioactive materials and surface coatings
Raphel, Jordan; Parisi-Amon, Andreina; Heilshorn, Sarah
2012-01-01
Photocrosslinkable, protein-engineered biomaterials combine a rapid, controllable, cytocompatible crosslinking method with a modular design strategy to create a new family of bioactive materials. These materials have a wide range of biomedical applications, including the development of bioactive implant coatings, drug delivery vehicles, and tissue engineering scaffolds. We present the successful functionalization of a bioactive elastin-like protein with photoreactive diazirine moieties. Scalable synthesis is achieved using a standard recombinant protein expression host followed by site-specific modification of lysine residues with a heterobifunctional N-hydroxysuccinimide ester-diazirine crosslinker. The resulting biomaterial is demonstrated to be processable by spin coating, drop casting, soft lithographic patterning, and mold casting to fabricate a variety of two- and three-dimensional photocrosslinked biomaterials with length scales spanning the nanometer to millimeter range. Protein thin films proved to be highly stable over a three-week period. Cell-adhesive functional domains incorporated into the engineered protein materials were shown to remain active post-photo-processing. Human adipose-derived stem cells achieved faster rates of cell adhesion and larger spread areas on thin films of the engineered protein compared to control substrates. The ease and scalability of material production, processing versatility, and modular bioactive functionality make this recombinantly engineered protein an ideal candidate for the development of novel biomaterial coatings, films, and scaffolds. PMID:23015764
Agapakis, Christina M; Silver, Pamela A
2009-07-01
Synthetic biology has been used to describe many biological endeavors over the past thirty years--from designing enzymes and in vitro systems, to manipulating existing metabolisms and gene expression, to creating entirely synthetic replicating life forms. What separates the current incarnation of synthetic biology from the recombinant DNA technology or metabolic engineering of the past is an emphasis on principles from engineering such as modularity, standardization, and rigorously predictive models. As such, synthetic biology represents a new paradigm for learning about and using biological molecules and data, with applications in basic science, biotechnology, and medicine. This review covers the canonical examples as well as some recent advances in synthetic biology in terms of what we know and what we can learn about the networks underlying biology, and how this endeavor may shape our understanding of living systems.
CD-based image archival and management on a hybrid radiology intranet.
Cox, R D; Henri, C J; Bret, P M
1997-08-01
This article describes the design and implementation of a low-cost image archival and management solution on a radiology network consisting of UNIX, IBM personal computer-compatible (IBM, Purchase, NY) and Macintosh (Apple Computer, Cupertino, CA) workstations. The picture archiving and communications system (PACS) is modular, scalable, and conforms to the Digital Imaging and Communications in Medicine (DICOM) 3.0 standard for image transfer, storage and retrieval. Image data is made available on soft-copy reporting workstations by a workflow management scheme and on desktop computers through a World Wide Web (WWW) interface. Data archival is based on recordable compact disc (CD) technology and is automated. The project has allowed the radiology department to eliminate the use of film in magnetic resonance (MR) imaging, computed tomography (CT) and ultrasonography.
ERIC Educational Resources Information Center
Schlenker, Richard M.; And Others
Presented is a manuscript for an introductory boiler water chemistry course for marine engineer education. The course is modular, self-paced, audio-tutorial, contract-graded, and combines lecture and laboratory instruction. Lectures are presented to students individually via audio tapes and 35 mm slides. The course consists of a total of 17 modules…
Concepts for Developing and Utilizing Crowdsourcing for Neurotechnology Advancement
2013-05-01
understanding of brain function and related neuroimaging tools, which is mostly limited to highly trained neuroscientists and engineers who wish to ... Included are some programmatic suggestions, as well as exemplar applications to fit this end goal. Subject terms: modular, EEG, neuroscience ... neuroscience-related problems among professionals in other fields, such as engineering and computer science, utilizing this approach to inspire true ...
Transition in Gas Turbine Engine Control System Architecture: Modular, Distributed, Embedded
2009-08-01
Design + Development + Certification + Procurement + Life Cycle Cost = Net Savings for our Customers ... Supporting small-quantity electronics: need a broadly applicable high-temperature electronics supply base ... Economic drivers for new FADEC designs: FADEC implementation time pacing engine development issues • FADEC ...
Mechanical-Kinetic Modeling of a Molecular Walker from a Modular Design Principle
NASA Astrophysics Data System (ADS)
Hou, Ruizheng; Loh, Iong Ying; Li, Hongrong; Wang, Zhisong
2017-02-01
Artificial molecular walkers beyond burnt-bridge designs are complex nanomachines that potentially replicate biological walkers in mechanisms and functionalities. Improving man-made walkers to the level of performance required for widespread applications remains difficult, largely because their biomimetic design principles involve entangled kinetic and mechanical effects that complicate the link between a walker's construction and its ultimate performance. Here, a synergic mechanical-kinetic model is developed for a recently reported DNA bipedal walker, which is based on a modular design principle that potentially enables many directional walkers driven by a length-switching engine. The model reproduces the experimental data of the walker and identifies its performance-limiting factors. The model also captures features common to the underlying design principle, including counterintuitive performance-construction relations that are explained by detailed balance, entropy production, and bias cancellation. While indicating a low directional fidelity for the present walker, the model suggests the possibility of improving the fidelity above 90% with a more powerful engine, which may be an improved version of the present engine or an entirely new engine motif, thanks to the flexible design principle. The model is readily adaptable to aid these experimental developments towards high-performance molecular walkers.
Long-read sequencing data analysis for yeasts.
Yue, Jia-Xing; Liti, Gianni
2018-06-01
Long-read sequencing technologies have become increasingly popular due to their strengths in resolving complex genomic regions. As a leading model organism with small genome size and great biotechnological importance, the budding yeast Saccharomyces cerevisiae has many isolates currently being sequenced with long reads. However, analyzing long-read sequencing data to produce high-quality genome assembly and annotation remains challenging. Here, we present a modular computational framework named long-read sequencing data analysis for yeasts (LRSDAY), the first one-stop solution that streamlines this process. Starting from the raw sequencing reads, LRSDAY can produce chromosome-level genome assembly and comprehensive genome annotation in a highly automated manner with minimal manual intervention, which is not possible using any alternative tool available to date. The annotated genomic features include centromeres, protein-coding genes, tRNAs, transposable elements (TEs), and telomere-associated elements. Although tailored for S. cerevisiae, we designed LRSDAY to be highly modular and customizable, making it adaptable to virtually any eukaryotic organism. When applying LRSDAY to an S. cerevisiae strain, it takes ∼41 h to generate a complete and well-annotated genome from ∼100× Pacific Biosciences (PacBio) reads, running the basic workflow with four threads. Basic experience working within the Linux command-line environment is recommended for carrying out the analysis using LRSDAY.
Parts plus pipes: synthetic biology approaches to metabolic engineering
Boyle, Patrick M.; Silver, Pamela A.
2011-01-01
Synthetic biologists combine modular biological “parts” to create higher-order devices. Metabolic engineers construct biological “pipes” by optimizing the microbial conversion of basic substrates to desired compounds. Many scientists work at the intersection of these two philosophies, employing synthetic devices to enhance metabolic engineering efforts. These integrated approaches promise to do more than simply improve product yields; they can expand the array of products that are tractable to produce biologically. In this review, we explore the application of synthetic biology techniques to next-generation metabolic engineering challenges, as well as the emerging engineering principles for biological design. PMID:22037345
Advancing Metabolic Engineering of Saccharomyces cerevisiae Using the CRISPR/Cas System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jiazhang; HamediRad, Mohammad; Zhao, Huimin
2018-04-18
Thanks to its ease of use, modularity, and scalability, the clustered regularly interspaced short palindromic repeats (CRISPR) system has been increasingly used in the design and engineering of Saccharomyces cerevisiae, one of the most popular hosts for industrial biotechnology. This review summarizes the recent development of this disruptive technology for metabolic engineering applications, including CRISPR-mediated gene knock-out and knock-in as well as transcriptional activation and interference. More importantly, multi-functional CRISPR systems that combine both gain- and loss-of-function modulations for combinatorial metabolic engineering are highlighted.
MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.
Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y
2018-01-02
Metaproteomics, the mass-spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to web-based workflow platforms. The open-source software package can be found at https://github.com/compomics/meta-proteome-analyzer .
Latimer, Luke N; Dueber, John E
2017-06-01
A common challenge in metabolic engineering is rapidly identifying rate-controlling enzymes in heterologous pathways for subsequent production improvement. We demonstrate a workflow to address this challenge and apply it to improving xylose utilization in Saccharomyces cerevisiae. For the eight reactions required for conversion of xylose to ethanol, we screened enzymes for functional expression in S. cerevisiae, followed by a combinatorial expression analysis to achieve pathway flux balancing and identification of limiting enzymatic activities. In the next round of strain engineering, we increased the copy number of these limiting enzymes and again tested the eight-enzyme combinatorial expression library in this new background. This workflow yielded a strain that has a ∼70% increase in biomass yield and a ∼240% increase in xylose utilization. Finally, we chromosomally integrated the expression library. This library enriched for strains with multiple integrations of the pathway, which likely were the result of tandem integrations mediated by promoter homology. Biotechnol. Bioeng. 2017;114: 1301-1309. © 2017 Wiley Periodicals, Inc.
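The size of such a combinatorial expression library is easy to reason about. A back-of-the-envelope sketch, where the enzyme and promoter names and the number of expression levels are hypothetical illustrations, not the study's actual parts:

```python
# Combinatorial expression library sizing: eight pathway enzymes, each
# paired with one of several promoter strengths (names are invented).
from itertools import product

enzymes = ["E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]  # 8 reactions
promoters = ["pLow", "pMed", "pHigh"]                        # 3 levels each

library = list(product(promoters, repeat=len(enzymes)))
print(f"{len(promoters)}^{len(enzymes)} = {len(library)} pathway designs")
# -> 3^8 = 6561 designs to screen for flux balancing
```

The exponential growth of this space is exactly why the workflow screens combinatorially first and only then boosts the copy number of the few limiting activities it identifies.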
Engineering Documentation and Data Control
NASA Technical Reports Server (NTRS)
Matteson, Michael J.; Bramley, Craig; Ciaruffoli, Veronica
2001-01-01
Mississippi Space Services (MSS), the facility services contractor for NASA's John C. Stennis Space Center (SSC), is utilizing technology to improve engineering documentation and data control. Two identified improvement areas, labor-intensive documentation research and outdated drafting standards, were targeted as top priority. MSS selected AutoManager(R) WorkFlow from Cyco software to manage engineering documentation. The software is currently installed on over 150 desktops. The outdated SSC drafting standard was written for pre-CADD drafting methods, in other words, board drafting. Implementation of COTS software solutions to manage engineering documentation and update the drafting standard resulted in significant increases in productivity by reducing the time spent searching for documents.
Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual
NASA Technical Reports Server (NTRS)
Hamil, R. G.; Ferden, S. L.
1977-01-01
The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.
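The optimization feature described above amounts to a parameter sweep: evaluate the system model over blocks of candidate values and keep the setting that minimizes the target quantity. A minimal sketch with an invented stand-in cost model; the parameter names and values are hypothetical, not ESOP2's actual inputs:

```python
# Sweep two design parameters over specified blocks of values and report
# the combination minimizing a target quantity (hypothetical cost model).
from itertools import product

def annual_cost(engine_size, solar_fraction):
    """Made-up stand-in for a MIUS system analysis [arbitrary units]."""
    return (engine_size - 3.2) ** 2 + 4.0 * (solar_fraction - 0.35) ** 2 + 10.0

engine_sizes = [2.0, 2.5, 3.0, 3.5, 4.0]   # prime-mover capacity block
solar_fractions = [0.0, 0.2, 0.4, 0.6]     # solar heating share block

best = min(product(engine_sizes, solar_fractions),
           key=lambda p: annual_cost(*p))
print("minimum cost at", best, "->", round(annual_cost(*best), 2))
```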
Engineering Design Handbook: Timing Systems and Components
1975-12-01
Contents include: Modular Components; Integrated Circuits; Matching Techniques; DC and AC Systems; Hybrid ... Assembly Illustrating Modular Design; Characteristics of the Source; Characteristics of the Load; Matching Source and ... Introduction: There is a continuous demand for increased precision and accuracy in frequency control. Today fast time pulses are used in ...
NASA Technical Reports Server (NTRS)
Studor, George
2010-01-01
The presentation reviews what is meant by the term 'fly-by-wireless' and the common problems and motivation behind it, provides recent examples, and examines NASA's future plans and the basis for collaboration. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable, lower-cost, modular, and higher-performance alternatives to wired data connectivity that benefit the entire vehicle/program life cycle. Focus areas are system engineering and integration methods to reduce cables and connectors, vehicle provisions for modularity and accessibility, and a 'tool box' of alternatives to wired connectivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuzawa, Satoshi; Keasling, Jay D.; Katz, Leonard
Complex polyketides comprise a large number of natural products that have broad application in medicine and agriculture. They are produced in bacteria and fungi from large enzyme complexes named type I modular polyketide synthases (PKSs) that are composed of multifunctional polypeptides containing discrete enzymatic domains organized into modules. The modular nature of PKSs has enabled a multitude of efforts to engineer the PKS genes to produce novel polyketides of predicted structure. We have also repurposed PKSs to produce a number of short-chain mono- and di-carboxylic acids and ketones that could have applications as fuels or industrial chemicals.
Liu, Yanfeng; Shin, Hyun-dong; Li, Jianghua; Liu, Long
2015-02-01
Metabolic engineering facilitates the rational development of recombinant bacterial strains for metabolite overproduction. Building on enormous advances in systems biology and synthetic biology, novel strategies have been established for multivariate optimization of metabolic networks in ensemble, spatial, and dynamic manners, such as modular pathway engineering, compartmentalization metabolic engineering, and metabolic engineering guided by genome-scale metabolic models, in vitro reconstitution, and systems and synthetic biology. Herein, we summarize recent advances in novel metabolic engineering strategies. Combined with advancing kinetic models and synthetic biology tools, more efficient new strategies for improving cellular properties can be established and applied to industrially important biochemical production.
User's Guide for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS): Version 2
NASA Technical Reports Server (NTRS)
Liu, Yuan; Frederick, Dean K.; DeCastro, Jonathan A.; Litt, Jonathan S.; Chan, William W.
2012-01-01
This report is a User's Guide for version 2 of the NASA-developed Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) software, which is a transient simulation of a large commercial turbofan engine (up to 90,000-lb thrust) with a realistic engine control system. The software supports easy access to health, control, and engine parameters through a graphical user interface (GUI). C-MAPSS v.2 has some enhancements over the original, including three actuators rather than one, the addition of actuator and sensor dynamics, and an improved controller, while retaining or improving on the convenience and user-friendliness of the original. C-MAPSS v.2 provides the user with a graphical turbofan engine simulation environment in which advanced algorithms can be implemented and tested. C-MAPSS can run user-specified transient simulations, and it can generate state-space linear models of the nonlinear engine model at an operating point. The code has a number of GUI screens that allow point-and-click operation and have editable fields for user-specified input. The software includes an atmospheric model which allows simulation of engine operation at altitudes from sea level to 40,000 ft, Mach numbers from 0 to 0.90, and ambient temperatures from -60 to 103 F. The package also includes a power-management system that allows the engine to be operated over a wide range of thrust levels throughout the full range of flight conditions.
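The atmospheric model's operating envelope quoted above translates directly into a simple bounds check. A minimal sketch using those published limits; the function name and interface are illustrative, not part of C-MAPSS itself, which is a MATLAB package:

```python
# Validate a requested flight condition against the C-MAPSS v.2 envelope
# (altitude 0-40,000 ft, Mach 0-0.90, ambient -60 to 103 F).
def in_envelope(alt_ft: float, mach: float, t_amb_f: float) -> bool:
    return (0.0 <= alt_ft <= 40_000.0
            and 0.0 <= mach <= 0.90
            and -60.0 <= t_amb_f <= 103.0)

print(in_envelope(35_000, 0.80, -40))  # True: typical cruise point
print(in_envelope(45_000, 0.80, -40))  # False: above simulated ceiling
```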
What Not To Do: Anti-patterns for Developing Scientific Workflow Software Components
NASA Astrophysics Data System (ADS)
Futrelle, J.; Maffei, A. R.; Sosik, H. M.; Gallager, S. M.; York, A.
2013-12-01
Scientific workflows promise to enable efficient scaling-up of researcher code to handle large datasets and workloads, as well as documentation of scientific processing via standardized provenance records, etc. Workflow systems and related frameworks for coordinating the execution of otherwise separate components are limited, however, in their ability to overcome software engineering design problems commonly encountered in pre-existing components, such as scripts developed externally by scientists in their laboratories. In practice, this often means that components must be rewritten or replaced in a time-consuming, expensive process. In the course of an extensive workflow development project involving large-scale oceanographic image processing, we have begun to identify and codify 'anti-patterns'--problematic design characteristics of software--that make components fit poorly into complex automated workflows. We have gone on to develop and document low-effort solutions and best practices that efficiently address the anti-patterns we have identified. The issues, solutions, and best practices can be used to evaluate and improve existing code, as well as to guide the development of new components. For example, we have identified a common anti-pattern we call 'batch-itis', in which a script fails and then cannot perform more work, even if that work is not precluded by the failure. The solution we have identified--removing unnecessary looping over independent units of work--is often easier to code than the anti-pattern, as it eliminates the need for complex control flow logic in the component. Other anti-patterns we have identified are similarly easy to identify and often easy to fix. We have drawn upon experience working with three science teams at Woods Hole Oceanographic Institution, each of which has designed novel imaging instruments and associated image analysis code. By developing use cases and prototypes within these teams, we have undertaken formal evaluations of software components developed by programmers with widely varying levels of expertise, and have been able to discover and characterize a number of anti-patterns. Our evaluation methodology and testbed have also enabled us to assess the efficacy of strategies to address these anti-patterns according to scientifically relevant metrics, such as the ability of algorithms to perform faster than the rate of data acquisition and the accuracy of workflow component output relative to ground truth. The set of anti-patterns and solutions we have identified augments the body of more well-known software engineering anti-patterns by addressing additional concerns that arise when a software component has to function as part of a workflow assembled out of independently-developed codebases. Our experience shows that identifying and resolving these anti-patterns reduces development time and improves performance without reducing component reusability.
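The 'batch-itis' fix described above is easy to picture in code. A minimal sketch, where process_image and the file names are hypothetical; the point is the control flow, not the processing:

```python
# 'Batch-itis' anti-pattern versus the per-item fix.

images = ["a.png", "b.png", "c.png"]

def process_image(path):
    if path == "b.png":
        raise ValueError("corrupt file")   # simulate one bad input
    return path.upper()

# Anti-pattern: one failure aborts the whole batch, losing all other results.
def batch_itis(paths):
    return [process_image(p) for p in paths]   # raises on b.png, returns nothing

# Fix: treat each independent unit of work separately so good inputs still
# complete; failures are recorded rather than poisoning the batch.
def per_item(paths):
    results, failures = [], []
    for p in paths:
        try:
            results.append(process_image(p))
        except Exception as exc:
            failures.append((p, str(exc)))
    return results, failures

print(per_item(images))  # (['A.PNG', 'C.PNG'], [('b.png', 'corrupt file')])
```

As the abstract notes, the fixed version is also the simpler one: the workflow engine, not the component, owns retry and scheduling logic.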
Informal Learning after Organizational Change
ERIC Educational Resources Information Center
Reardon, Robert F.
2004-01-01
This inductive, qualitative study investigates how learning took place among nine experienced engineers in an industrial setting after a major reorganization. A thematic analysis of the transcripts revealed that the learning was informal and that it fell into three distinct categories: learning new workflows, learning about the chemical process,…
ASCEM Data Browser (ASCEMDB) v0.8
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROMOSAN, ALEXANDRU
Data management tool designed for the Advanced Simulation Capability for Environmental Management (ASCEM) framework. Distinguishing features of this gateway include: (1) handling of complex geometry data, (2) an advanced selection mechanism, (3) state-of-the-art rendering of spatiotemporal data records, and (4) seamless integration with a distributed workflow engine.
2012-05-16
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of complex workflows running over a large number of nodes with multiple parallel task executions, where the workflow data and the measurement data of the executions can themselves reach terabytes or petabytes. To help identify performance bottlenecks and debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze large amounts of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and big data workflows.
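As an illustration of the log-ingestion and feature-extraction step, here is a minimal pandas sketch over a fabricated log format; the real framework targets NERSC system logs and uses a distributed processing engine rather than a single-node script:

```python
import re
import pandas as pd

# Hypothetical syslog-like lines standing in for real system logs.
lines = [
    "2016-03-01T10:00:01 node042 job=771 read_mb=512 wall_s=38",
    "2016-03-01T10:00:07 node013 job=772 read_mb=2048 wall_s=145",
    "2016-03-01T10:00:09 node042 job=773 read_mb=64 wall_s=12",
]

pattern = re.compile(
    r"(?P<ts>\S+) (?P<node>\S+) job=(?P<job>\d+) "
    r"read_mb=(?P<read_mb>\d+) wall_s=(?P<wall_s>\d+)")

records = [m.groupdict() for m in map(pattern.match, lines) if m]
df = pd.DataFrame(records).astype({"read_mb": int, "wall_s": int})

# Derived performance feature: effective I/O rate per task.
df["io_mb_per_s"] = df["read_mb"] / df["wall_s"]
print(df.groupby("node")["io_mb_per_s"].mean())  # per-node summary
```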
Automated Engineering Design (AED); An approach to automated documentation
NASA Technical Reports Server (NTRS)
Mcclure, C. W.
1970-01-01
Automated engineering design (AED) is reviewed. AED consists of a high-level systems programming language, a series of modular precoded subroutines, and a set of powerful software machine tools that effectively automate the production and design of new languages. AED is used primarily for the development of problem- and user-oriented languages. Software production phases are diagrammed, and factors which inhibit effective documentation are evaluated.
The Modular Clock Algorithm for Blind Rendezvous
2009-03-26
...capabilities in spectrum management, particularly in harvesting unused portions of pre-allocated bandwidth under DSA. The term "cognitive radio" was... of rendezvous and our role as the waiter. However, if the "child" refuses to move from non-common spectrum, rendezvous cannot occur. Bluetooth...
Modular Engine Noise Component Prediction System (MCP) Technical Description and Assessment Document
NASA Technical Reports Server (NTRS)
Herkes, William H.; Reed, David H.
2005-01-01
This report describes an empirical prediction procedure for turbofan engine noise. The procedure generates predicted noise levels for several noise components, including inlet- and aft-radiated fan noise, and jet-mixing noise. This report discusses the noise source mechanisms, the development of the prediction procedures, and the assessment of the accuracy of these predictions. Finally, some recommendations for future work are presented.
Modular assembly of synthetic proteins that span the plasma membrane in mammalian cells.
Qudrat, Anam; Truong, Kevin
2016-12-09
To achieve synthetic control over how a cell responds to other cells or the extracellular environment, it is important to reliably engineer proteins that can traffic to and span the plasma membrane. Using a modular approach to assemble proteins, we identified the minimum components required to engineer such membrane-spanning proteins with predictable orientation in mammalian cells. While a transmembrane domain (TM) fused to the N-terminus of a protein is sufficient to traffic it to the endoplasmic reticulum (ER), an additional signal peptidase cleavage site downstream of this TM enhanced sorting out of the ER. Next, a second TM in the synthetic protein helped anchor and accumulate the membrane-spanning protein on the plasma membrane. The orientation of the components of the synthetic protein was determined by measuring intracellular Ca2+ signaling using the R-GECO biosensor and by measuring extracellular quenching of yellow fluorescent protein variants under saturating acidic and salt conditions. This work forms the basis of engineering novel proteins that span the plasma membrane to potentially control intracellular responses to extracellular conditions.
Integrated Control System Engineering Support.
1984-12-01
...interference susceptibility. Study multiplex bus loading requirements. Flight Control Software: demonstrate efficiencies of modular software and... Major technical thrusts include the development of (a) task-tailored multimode control laws incorporating direct force and weapon line pointing...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bach, Christian; Sherman, William; Pallis, Jani
Zinc finger nucleases (ZFNs) are associated with cell death and apoptosis caused by binding at countless undesired locations. This cytotoxicity stems from the ability of engineered zinc finger domains to bind dissimilar DNA sequences with high affinity. In general, the binding preferences of transcription factors exhibit significant degenerate diversity and complexity, which complicates the design and engineering of precise DNA binding domains. The evolutionary success of natural zinc finger proteins, however, evinces that nature created specific evolutionary traits and strategies, such as modularity and rank-specific recognition, to cope with binding complexity; these are critical for creating clinically viable tools to precisely modify the human genome. Our findings indicate preservation of general modularity and significant alteration of the rank-specific binding preferences of the three-finger binding domain of transcription factor SP1 when exchanging amino acids in the 2nd finger.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agazzone, U.; Ausiello, F.P.
1981-06-23
A power-generating installation comprises a plurality of modular power plants each comprised of an internal combustion engine connected to an electric machine. The electric machine is used to start the engine and thereafter operates as a generator supplying power to an electrical network common to all the modular plants. The installation has a control and protection system comprising a plurality of control modules each associated with a respective plant, and a central unit passing control signals to the modules to control starting and stopping of the individual power plants. Upon the detection of abnormal operation or failure of its associated power plant, each control module transmits an alarm signal back to the central unit, which thereupon stops, or prevents the starting of, the corresponding power plant. Parameters monitored by each control module include generated current and inter-winding leakage current of the electric machine.
Design of a Modular 5-kW Power Processing Unit for the Next-Generation 40-cm Ion Engine
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Bond, Thomas; Okada, Don; Pyter, Janusz; Wiseman, Steve
2002-01-01
NASA Glenn Research Center is developing a 5/10-kW ion engine for a broad range of mission applications. Simultaneously, a 5-kW breadboard power processing unit (PPU) is being designed and fabricated. The design includes a beam supply consisting of four 1.1-kW power modules connected in parallel, equally sharing the output current. A novel phase-shifted/pulse-width-modulated dual full-bridge topology was chosen for its soft-switching characteristics. The proposed modular approach allows scalability to higher powers as well as the possibility of implementing an N+1 redundant beam supply. Efficiencies in excess of 96% were measured during testing of a breadboard beam power module. A specific mass of 3.0 kg/kW is expected for a flight PPU. This represents a 50% reduction from the state-of-the-art NSTAR power processor.
Grid-based platform for training in Earth Observation
NASA Astrophysics Data System (ADS)
Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor
2010-05-01
The GiSHEO platform [1], which provides on-demand services for training and higher education in Earth Observation, is being developed in the frame of an ESA-funded project through its PECS programme, to respond to the need for powerful education resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring multiple simultaneous short-time-response data-intensive tasks, as in the case of a short training event, have proved to be ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of the four academic institutions involved in the project. Authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service, exposing a particular workflow engine; the data indexing and discovery service, for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems such as detecting the underground remains of old fortifications, walls, or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without visual design tools. The basics of SILK are tasks and the relations (rules) between them. It is similar to the SCUFL language but does not rely on XML, in order to allow the introduction of more workflow-specific features. Moreover, an event-condition-action (ECA) approach allows greater flexibility when expressing data and task dependencies, as well as the creation of adaptive workflows that can react to changes in the configuration of the Grid or in the workflow itself. Changes inside the Grid are handled by creating specific rules that allow resource selection based on various task-scheduling criteria. Modifications of the workflow are usually accomplished either by inserting or retracting, at runtime, the rules belonging to it, or by modifying the executor of a task when a better one is found. The former implies changes in the workflow's structure, while the latter does not necessarily mean a change of resource but, more precisely, a change of the algorithm used for solving the task. More details can be found in [3]. Another important platform component is the data indexing and storage service, GDIS, providing features for data storage, indexing data using a specialized RDBMS, finding data by various conditions, querying external services, and keeping track of temporary data generated by other components. The data storage component of GDIS is responsible for storing the data using available storage backends such as local disk file systems (ext3), local cluster storage (GFS), or distributed file systems (HDFS).
A front-end GridFTP service interacts with the storage domains on behalf of the clients in a uniform way and also enforces the data-access security restrictions provided by other specialized services. Data indexing is performed by PostGIS. An advanced and flexible interface for searching the project's geographical repository is built around a custom query language (LLQL, a Lisp-Like Query Language) designed to provide fine-grained access to the data in the repository and to query external services (e.g., for exploiting the connection with the GENESI-DR catalog). More details can be found in [4]. The Workload Management System (WMS) provides two types of resource managers: the first is based on Condor HTC and uses Condor as a job manager for task dispatching and worker nodes (for development purposes), while the second uses GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta-language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionality necessary to create the visual appearance of lessons through the use of visual containers such as tools, patterns, and templates. Teachers use the platform for testing already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. Students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009. Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. References: [1] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [2] D. Petcu, D. Zaharie, M. Neagul, S. Panica, M. Frincu, D. Gorgan, T. Stefanut, V. Bacu, Remote Sensed Image Processing on Grids for Training in Earth Observation. In Image Processing, V. Kordic (ed.), In-Tech, January 2010. [3] M. Neagul, S. Panica, D. Petcu, D. Zaharie, D. Gorgan, Web and Grid Services for Training in Earth Observation, IDAACS 2009, IEEE Computer Press, 241-246. [4] M. Frincu, S. Panica, M. Neagul, D. Petcu, GiSHEO: On Demand Grid Service Based Platform for EO Data Processing, HiperGrid 2009, Politehnica Press, 415-422. [5] D. Gorgan, T. Stefanut, V. Bacu, Grid Based Training Environment for Earth Observation, GPC 2009, LNCS 5529, 98-109.
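The event-condition-action style that SILK builds on can be sketched in a few lines; this is an illustrative toy, not the OSyRIS/DROOLS implementation, and the rule shown (rescheduling a task to another cluster under load) is hypothetical:

```python
# Each rule fires when its event arrives and its condition holds; inserting
# or retracting rules at runtime is what makes the workflow adaptive.
class ECAEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, event, condition, action):
        self.rules.append((event, condition, action))

    def post(self, event, ctx):
        for ev, cond, act in list(self.rules):
            if ev == event and cond(ctx):
                act(ctx)

engine = ECAEngine()
engine.add_rule(
    event="task_done",
    condition=lambda ctx: ctx["load"] > 0.9,              # grid is saturated
    action=lambda ctx: ctx.update(executor="cluster-B"),  # reschedule
)

ctx = {"load": 0.95, "executor": "cluster-A"}
engine.post("task_done", ctx)
print(ctx["executor"])  # -> cluster-B
```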
Quality Assurance Program Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halford, Vaughn Edward; Ryder, Ann Marie
Effective May 1, 2017, under a new executive leadership team, Sandia began operating within a new organizational structure. National Technology and Engineering Solutions of Sandia's (Sandia's) Quality Assurance Program (QAP) was established to assign responsibilities and authorities, define workflow policies and requirements, and provide for the performance and assessment of work.
Desktop Publishing in the University: Current Progress, Future Visions.
ERIC Educational Resources Information Center
Smith, Thomas W.
1989-01-01
Discussion of the workflow involved in desktop publishing focuses on experiences at the College of Engineering at the University of Wisconsin at Madison. Highlights include cost savings and productivity gains in page layout and composition; editing, translation, and revision issues; printing and distribution; and benefits to the reader. (LRW)
ERIC Educational Resources Information Center
Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew
2015-01-01
Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…
Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.
Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S
The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
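As a hedged illustration of the pipeline described above, the sketch below parses one fabricated HL7 v2 message and stores the relevant fields; SQLite stands in for the SQL Server database, and the segment and field positions are illustrative rather than a faithful copy of the site's actual feed:

```python
import sqlite3

# A fabricated ORM-style HL7 v2 message; real feeds come from the RIS.
msg = ("MSH|^~\\&|EPIC|RIS|DASH|RAD|202301011200||ORM^O01|123|P|2.3\r"
       "OBR|1|ACC123||MR^MRI BRAIN W/O CONTRAST|||202301011215")

def parse_hl7(message):
    """Minimal pipe-and-hat parsing: segments split on \\r, fields on |."""
    segments = {s.split("|")[0]: s.split("|") for s in message.split("\r")}
    obr = segments["OBR"]
    return {"accession": obr[2], "procedure": obr[4].split("^")[1]}

exam = parse_hl7(msg)

# SQLite stands in for the SQL Server store behind the dashboard.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE exams (accession TEXT, procedure_name TEXT)")
db.execute("INSERT INTO exams VALUES (?, ?)",
           (exam["accession"], exam["procedure"]))
print(db.execute("SELECT * FROM exams").fetchall())
```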
SearchGUI: A Highly Adaptable Common Interface for Proteomics Search and de Novo Engines.
Barsnes, Harald; Vaudel, Marc
2018-05-25
Mass-spectrometry-based proteomics has become the standard approach for identifying and quantifying proteins. A vital step consists of analyzing experimentally generated mass spectra to identify the underlying peptide sequences for later mapping to the originating proteins. We here present the latest developments in SearchGUI, a common open-source interface for the most frequently used freely available proteomics search and de novo engines that has evolved into a central component in numerous bioinformatics workflows.
Large Scale Software Building with CMake in ATLAS
NASA Astrophysics Data System (ADS)
Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration
2017-10-01
The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.
Müllner, Markus; Cui, Jiwei; Noi, Ka Fung; Gunawan, Sylvia T; Caruso, Frank
2014-06-03
We report a templating approach for the preparation of functional polymer replica particles via surface-initiated polymerization in mesoporous silica templates. Subsequent removal of the template resulted in discrete polymer particles. Furthermore, redox-responsive replica particles could be engineered to disassemble in a reducing environment. Particles, made of poly(methacryloyloxyethyl phosphorylcholine) (PMPC) or poly[oligo(ethylene glycol) methyl ether methacrylate] (POEGMA), exhibited very low association to human cancer cells (below 5%), which renders the reported charge-neutral polymer particles a modular and versatile class of highly functional carriers with potential applications in drug delivery.
Using collective variables to drive molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Fiorin, Giacomo; Klein, Michael L.; Hénin, Jérôme
2013-12-01
A software framework is introduced that facilitates the application of biasing algorithms to collective variables of the type commonly employed to drive massively parallel molecular dynamics (MD) simulations. The modular framework that is presented enables one to combine existing collective variables into new ones, and combine any chosen collective variable with available biasing methods. The latter include the classic time-dependent biases referred to as steered MD and targeted MD, the temperature-accelerated MD algorithm, as well as the adaptive free-energy biases called metadynamics and adaptive biasing force. The present modular software is extensible, and portable between commonly used MD simulation engines.
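The composition idea, building new collective variables from existing ones and attaching a bias, can be sketched as follows; this is a toy in plain numpy, not the framework's actual API:

```python
import numpy as np

def distance_cv(coords, i, j):
    """Collective variable: distance between atoms i and j."""
    return np.linalg.norm(coords[i] - coords[j])

def combined_cv(coords):
    """A new CV built from existing ones, here a difference of distances."""
    return distance_cv(coords, 0, 1) - distance_cv(coords, 1, 2)

def harmonic_bias_energy(cv_value, center, k):
    """Restraint energy U = 0.5 * k * (cv - center)^2, as in steered MD."""
    return 0.5 * k * (cv_value - center) ** 2

coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [1.2, 1.5, 0.0]])
cv = combined_cv(coords)
print(cv, harmonic_bias_energy(cv, center=0.0, k=10.0))
```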
NASA Astrophysics Data System (ADS)
Zhang, Min; He, Weiyi
2018-06-01
Guided by principal-agent theory and modular theory, collaborative innovation among green-technology companies, design contractors, and project builders under a united agency will provide direction for the future development of the green construction supply chain. After analyzing the existing independent agencies, this paper proposes an industry-university-research bilateral collaborative innovation network architecture, with modularization around the innovative function of engineering design in the context of non-standard transformation interfaces; it then analyzes the innovation responsibility center and offers countermeasures and suggestions to improve the performance of the bilateral collaborative innovation network.
NASA Astrophysics Data System (ADS)
Hodgkins, Alex Liam; Diez, Victor; Hegner, Benedikt
2012-12-01
The Software Process & Infrastructure (SPI) project provides a build infrastructure for regular integration testing and release of the LCG Applications Area software stack. In the past, regular builds have been provided using a system which has been constantly growing to include more features like server-client communication, long-term build history and a summary web interface using present-day web technologies. However, the ad-hoc style of software development resulted in a setup that is hard to monitor, inflexible and difficult to expand. The new version of the infrastructure is based on the Django Python framework, which allows for a structured and modular design, facilitating later additions. Transparency in the workflows and ease of monitoring has been one of the priorities in the design. Formerly missing functionality like on-demand builds or release triggering will support the transition to a more agile development process.
Provenance for Runtime Workflow Steering and Validation in Computational Seismology
NASA Astrophysics Data System (ADS)
Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M.
2014-12-01
Provenance systems may be offered by modern workflow engines to collect metadata about data transformations at runtime. When combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, for research applications performing long-lasting cross-correlation analysis and high-resolution simulations, the immediate notification of logical errors and rapid access to intermediate results can produce reactions that foster more efficient progress of the research. These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine-grained provenance and the development of provenance-aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc.). This work looks at the adoption of the W3C-PROV concepts and data model within a user-driven processing and validation framework for seismic data, also supporting computational and data management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with community understanding. Moreover, it can contain user-defined terms and annotations. The current implementation of the system is supported by the EU-funded VERCE project (http://verce.eu). In addition to the provenance generation mechanisms, it provides a prototype browser-based user interface and a web API built on top of a NoSQL storage technology, exploring ways to ensure rapid and flexible access to the lineage traces. It supports users with the visualisation of graphical products and offers combined operations to access and download the data, which may be selectively stored at runtime into dedicated data archives.
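A minimal sketch of recording W3C-PROV-style lineage at runtime, using plain dictionaries in the spirit of PROV-JSON; the task and file names are fabricated, and "seis:stage" stands in for a community-specific metadata vocabulary:

```python
import json
from datetime import datetime, timezone

# PROV-JSON-like structure: entities (data), activities (tasks), and the
# "used"/"wasGeneratedBy" relations linking them.
doc = {"entity": {}, "activity": {}, "used": [], "wasGeneratedBy": []}

def record_task(doc, task_id, inputs, outputs):
    doc["activity"][task_id] = {
        "prov:startTime": datetime.now(timezone.utc).isoformat()}
    for ent in inputs:
        doc["entity"].setdefault(ent, {})
        doc["used"].append({"activity": task_id, "entity": ent})
    for ent in outputs:
        doc["entity"].setdefault(ent, {"seis:stage": task_id})
        doc["wasGeneratedBy"].append({"entity": ent, "activity": task_id})

record_task(doc, "xcorr_run_42",
            inputs=["trace_A.mseed", "trace_B.mseed"],
            outputs=["xcorr_AB.npz"])
print(json.dumps(doc, indent=2))
```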
TVC actuator model. [for the space shuttle main engine
NASA Technical Reports Server (NTRS)
Baslock, R. W.
1977-01-01
A prototype Space Shuttle Main Engine (SSME) Thrust Vector Control (TVC) actuator analog model was successfully completed. The prototype, mounted on five printed circuit (PC) boards, was delivered to NASA, checked out, and tested using a modular replacement technique on an analog computer. In all cases, the prototype model performed within the recording accuracy of the analog computer, which is well within the tolerances of the specifications.
The final days of Solar Max - Lessons learned from engineering evaluation tests
NASA Technical Reports Server (NTRS)
Donnelly, Michael L.; Croft, John W.; Ward, David K.; Thames, Michael A.
1990-01-01
End-of-life engineering evaluation tests were performed on Solar Max between October and November 1989. The tests included four-wheel control law operation; reaction wheel rundowns; modular power subsystem standard power regulator unit voltage-temperature level tests; battery rundown/2nd plateau determination; high gain antenna retraction and jettison; and solar array jettison. This paper presents these tests, their results, and the lessons learned from them.
Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian
2016-03-04
Proteomics data integration has become a broad field, with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches have begun to emerge; workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
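A short usage sketch in the spirit of Ursgal's documented UController pattern; treat the engine identifiers, parameter names, and file names as assumptions that may differ between Ursgal versions:

```python
import ursgal

# Parameters and engine names follow Ursgal's published examples but are
# assumptions here; consult the documentation for your installed version.
uc = ursgal.UController(
    params={"database": "human_proteome_with_decoys.fasta"}
)

validated = []
for engine in ["xtandem", "omssa"]:
    raw = uc.search(input_file="sample.mzML", engine=engine)
    validated.append(uc.validate(input_file=raw, engine="percolator_2_08"))

# Merge the statistically post-processed results ("combined FDR" approach).
merged = uc.combine_search_results(
    input_files=validated, engine="combine_FDR_0_1"
)
```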
Developing an Integration Infrastructure for Distributed Engine Control Technologies
NASA Technical Reports Server (NTRS)
Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan
2014-01-01
Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rea, Jonathan E.; Oshman, Christopher J.; Olsen, Michele L.
In this paper, we present performance simulations and techno-economic analysis of a modular dispatchable solar power tower. Using a heliostat field and power block three orders of magnitude smaller than conventional solar power towers, our unique configuration locates thermal storage and a power block directly on a tower receiver. To make the system dispatchable, a valved thermosyphon controls heat flow from a latent heat thermal storage tank to a Stirling engine. The modular design results in minimal balance of system costs and enables high deployment rates with a rapid realization of economies of scale. In this new analysis, we combine performance simulations with techno-economic analysis to evaluate levelized cost of electricity, and find that the system has potential for cost-competitiveness with natural gas peaking plants and alternative dispatchable renewables.
A spatially localized architecture for fast and modular DNA computing
NASA Astrophysics Data System (ADS)
Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg
2017-09-01
Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.
Ribo-attenuators: novel elements for reliable and modular riboswitch engineering.
Folliard, Thomas; Mertins, Barbara; Steel, Harrison; Prescott, Thomas P; Newport, Thomas; Jones, Christopher W; Wadhams, George; Bayer, Travis; Armitage, Judith P; Papachristodoulou, Antonis; Rothschild, Lynn J
2017-07-04
Riboswitches are structural genetic regulatory elements that directly couple the sensing of small molecules to gene expression. They have considerable potential for applications throughout synthetic biology and bio-manufacturing as they are able to sense a wide range of small molecules and regulate gene expression in response. Despite over a decade of research they have yet to reach this considerable potential as they cannot yet be treated as modular components. This is due to several limitations including sensitivity to changes in genetic context, low tunability, and variability in performance. To overcome the associated difficulties with riboswitches, we have designed and introduced a novel genetic element called a ribo-attenuator in Bacteria. This genetic element allows for predictable tuning, insulation from contextual changes, and a reduction in expression variation. Ribo-attenuators allow riboswitches to be treated as truly modular and tunable components, thus increasing their reliability for a wide range of applications.
A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*
Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing
2011-01-01
Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from being identified. Including known coding variations in protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three out of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow on data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines including Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108
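The subset-specific false discovery rate idea can be sketched with classic target-decoy counting; this is in the spirit of the paper's modified estimation rather than its exact method, and the PSMs below are fabricated:

```python
def fdr(psms):
    """Classic target-decoy FDR: #decoys / #targets among accepted PSMs."""
    decoys = sum(1 for p in psms if p["decoy"])
    targets = sum(1 for p in psms if not p["decoy"])
    return decoys / max(targets, 1)

# Estimating FDR separately for the small, riskier variant subset rather
# than pooling it with ordinary peptides; fabricated example PSMs.
psms = [
    {"peptide": "LVNELTEFAK", "variant": False, "decoy": False},
    {"peptide": "LVNEVTEFAK", "variant": True,  "decoy": False},
    {"peptide": "KAFETLEVNV", "variant": True,  "decoy": True},
    {"peptide": "AEFVEVTKLV", "variant": False, "decoy": False},
]
variant_psms = [p for p in psms if p["variant"]]
print("global FDR:", fdr(psms), "variant-only FDR:", fdr(variant_psms))
```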
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next-generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
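A minimal sketch of the backward-chaining planning idea, with a toy two-rule knowledge base; BETSY's actual data model and rule set are far richer, and the tool names below are illustrative:

```python
# Rules map a desired output data type to (tool, required input types).
RULES = {
    "aligned_bam": ("bwa_mem", ["fastq", "reference_fasta"]),
    "variant_vcf": ("gatk_haplotypecaller", ["aligned_bam", "reference_fasta"]),
}
AVAILABLE = {"fastq", "reference_fasta"}

def plan(goal, have):
    """Backward-chain from the goal to available data, emitting tool steps."""
    if goal in have:
        return []
    tool, needs = RULES[goal]
    steps = []
    for need in needs:
        steps += [s for s in plan(need, have) if s not in steps]
    return steps + [tool]

print(plan("variant_vcf", AVAILABLE))
# -> ['bwa_mem', 'gatk_haplotypecaller']
```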
Challenges and opportunities in synthetic biology for chemical engineers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, YZ; Lee, JK; Zhao, HM
Synthetic biology provides numerous great opportunities for chemical engineers in the development of new processes for large-scale production of biofuels, value-added chemicals, and protein therapeutics. However, challenges across all scales abound. In particular, the modularization and standardization of the components in a biological system, so-called biological parts, remain the biggest obstacle in synthetic biology. In this perspective, we will discuss the main challenges and opportunities in the rapidly growing synthetic biology field and the important roles that chemical engineers can play in its advancement. (C) 2012 Elsevier Ltd. All rights reserved.
Chuan, Yap P; Rivera-Hernandez, Tania; Wibowo, Nani; Connors, Natalie K; Wu, Yang; Hughes, Fiona K; Lua, Linda H L; Middelberg, Anton P J
2013-09-01
Modularization of a peptide antigen for presentation on a microbially synthesized murine polyomavirus (MuPyV) virus-like particle (VLP) offers a new alternative for rapid and low-cost vaccine delivery at a global scale. In this approach, heterologous modules containing peptide antigenic elements are fused to and displayed on the VLP carrier, allowing enhancement of peptide immunogenicity via ordered and densely repeated presentation of the modules. This study addresses two key engineering questions pertaining to this platform, exploring the effects of (i) pre-existing carrier-specific immunity on modular VLP vaccine effectiveness and (ii) an increase in the number of antigenic elements per VLP on the peptide-specific immune response. These effects were studied in a mouse model and with modular MuPyV VLPs presenting a group A streptococcus (GAS) peptide antigen, J8i. The data presented here demonstrate that immunization with a modular VLP could induce high levels of J8i-specific antibodies despite a strong pre-existing anti-carrier immune response. Doubling the number of J8i antigenic elements per VLP did not enhance J8i immunogenicity at a constant peptide dose. However, the strategy, when used in conjunction with an increased VLP dose, could effectively increase the peptide dose up to 10-fold, leading to a significantly higher J8i-specific antibody titer. This study further supports the feasibility of the MuPyV modular VLP vaccine platform by showing that, in the absence of adjuvant, modularized GAS antigenic peptide at a dose as low as 150 ng was sufficient to raise a high level of peptide-specific IgGs indicative of bactericidal activity. Copyright © 2013 Wiley Periodicals, Inc.
Orbit Transfer Rocket Engine Technology Program: Advanced engine study, task D.1/D.3
NASA Technical Reports Server (NTRS)
Martinez, A.; Erickson, C.; Hines, B.
1986-01-01
Concepts for space maintainability of OTV engines were examined. An engine design was developed which was driven by space maintenance requirements and by a failure mode and effects (FME) analysis. Modularity within the engine was shown to offer cost benefits and improved space maintenance capabilities. Space operable disconnects were conceptualized for both engine change-out and for module replacement. Through FME mitigation the modules were conceptualized to contain the least reliable and most often replaced engine components. A preliminary space maintenance plan was developed around a controls and condition monitoring system using advanced sensors, controls, and condition monitoring concepts. A complete engine layout was prepared satisfying current vehicle requirements and utilizing projected component advanced technologies. A technology plan for developing the required technology was assembled.
Computer program for a four-cylinder-Stirling-engine controls simulation
NASA Technical Reports Server (NTRS)
Daniels, C. J.; Lorenzo, C. F.
1982-01-01
A transient simulation computer program for a four-cylinder Stirling engine is presented. The program is intended for controls analysis. The associated engine model was simplified to shorten computation time. The model includes engine mechanical drive dynamics and vehicle load effects. The computer program also includes subroutines that allow (1) acceleration of the engine by addition of hydrogen to the system, and (2) braking of the engine by short-circuiting of the working spaces. Subroutines to calculate degraded engine performance (e.g., due to piston ring and piston rod leakage) are provided. Input data required to run the program are described, and flow charts are provided. The program is modular to allow easy modification of individual routines. Examples of steady-state and transient results are presented.
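The modular structure, a fixed integration loop with swappable control subroutines, can be sketched as follows; the dynamics below are toy stand-ins, not the actual Stirling engine model:

```python
# Swappable "subroutines" mirror the program's modular structure: the same
# integration loop runs with a different control routine plugged in.
def accelerate(state, dt):
    state["pressure"] += 5.0 * dt        # stand-in for hydrogen addition

def brake(state, dt):
    state["pressure"] *= (1 - 2.0 * dt)  # stand-in for short-circuiting

def simulate(control, t_end=2.0, dt=0.01):
    state = {"speed": 100.0, "pressure": 10.0}
    t = 0.0
    while t < t_end:
        control(state, dt)
        torque = 0.3 * state["pressure"] - 0.01 * state["speed"]  # toy model
        state["speed"] += torque / 1.5 * dt  # drive-dynamics inertia
        t += dt
    return state

print(simulate(accelerate), simulate(brake))
```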
The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services
NASA Astrophysics Data System (ADS)
Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas
Within this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact working of such services is difficult to predict in the context of a market, and that an environment for evaluating their behavior in an emulated market is therefore needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows trading goods using different market mechanisms. The capabilities of these components of the GridEcon Platform in conjunction with the economics-aware services are described in this paper in detail. The validation of an implemented market mechanism and a capacity planning service using the GridEcon Platform also demonstrated the usefulness of the GridEcon Platform.
ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Blanford; E. Keldrauk; M. Laufer
2010-09-20
Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of the next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using factory-prefabricated structural modules, for application to external event shell and base-isolated structures.
Proteomics in the genome engineering era.
Vandemoortele, Giel; Gevaert, Kris; Eyckerman, Sven
2016-01-01
Genome engineering experiments used to be lengthy, inefficient, and often expensive, preventing widespread adoption of such experiments for the full assessment of endogenous protein functions. With the revolutionary clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) technology, genome engineering became accessible to the broad life sciences community and is now implemented in several research areas. One particular field that can benefit significantly from this evolution is proteomics, where a substantial impact on experimental design and general proteome biology can be expected. In this review, we describe the main applications of genome engineering in proteomics, including the use of engineered disease models and endogenous epitope tagging. In addition, we provide an overview of current literature and highlight important considerations when launching genome engineering technologies in proteomics workflows. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
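The common paradigm the review refers to, scoring candidate peptides by matching theoretical fragment masses against the observed spectrum, can be sketched naively; real engines use far more elaborate scoring and indexing, and the spectrum below is fabricated (the residue masses are standard monoisotopic values):

```python
# Score each candidate peptide by counting theoretical b-ion masses
# matched in the observed spectrum within a tolerance (toy example).
MASSES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
          "V": 99.06841, "L": 113.08406, "K": 128.09496, "E": 129.04259}
PROTON = 1.00728

def b_ions(peptide):
    mass = PROTON
    for aa in peptide[:-1]:
        mass += MASSES[aa]
        yield mass

def score(peptide, spectrum, tol=0.02):
    return sum(any(abs(ion - peak) < tol for peak in spectrum)
               for ion in b_ions(peptide))

spectrum = [72.044, 171.113, 284.197, 413.239]  # fabricated peak list
database = ["AVLEK", "GASPK", "ALVEK"]
print(max(database, key=lambda pep: score(pep, spectrum)))  # -> AVLEK
```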
MAPI: towards the integrated exploitation of bioinformatics Web Services.
Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo
2011-10-27
Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion, and their heterogeneity complicate the integrated exploitation of such data-processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by the client have to be installed, and that the module functionality can be extended without the need to re-write the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g., GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
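The uniform-representation idea can be sketched as a descriptor/connector pair: every service, whatever its underlying protocol, is registered behind one interface, so client code never touches protocol details. The class and function names below are illustrative stand-ins, not MAPI's actual Java API.

```python
# Illustrative sketch (not MAPI's actual API): each tool is wrapped in one
# descriptor whose connector hides invocation details, so clients discover
# and call heterogeneous services uniformly.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ServiceDescriptor:
    name: str
    protocol: str           # e.g. "SOAP", "BioMOBY", "GRID-based"
    inputs: list
    outputs: list
    invoke: Callable        # protocol-specific connector supplied by a module

REGISTRY: Dict[str, ServiceDescriptor] = {}

def register(desc: ServiceDescriptor) -> None:
    REGISTRY[desc.name] = desc

def call(name: str, **kwargs):
    # The client never builds SOAP envelopes or BioMOBY messages directly.
    return REGISTRY[name].invoke(**kwargs)

# Example: registering a hypothetical sequence-search wrapper from a SOAP module.
register(ServiceDescriptor("seq_search", "SOAP", ["sequence"], ["hits"],
                           invoke=lambda sequence: f"hits for {sequence[:10]}"))
```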
Evaluating 3D-printed biomaterials as scaffolds for vascularized bone tissue engineering.
Wang, Martha O; Vorwald, Charlotte E; Dreher, Maureen L; Mott, Eric J; Cheng, Ming-Huei; Cinar, Ali; Mehdizadeh, Hamidreza; Somo, Sami; Dean, David; Brey, Eric M; Fisher, John P
2015-01-07
There is an unmet need for a consistent set of tools for the evaluation of 3D-printed constructs. A toolbox developed to design, characterize, and evaluate 3D-printed poly(propylene fumarate) scaffolds is proposed for vascularized engineered tissues. This toolbox combines modular design and non-destructive fabricated design evaluation, evaluates biocompatibility and mechanical properties, and models angiogenesis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Synthetic Microbial Ecology: Engineering Habitats for Modular Consortia
Ben Said, Sami; Or, Dani
2017-01-01
The metabolic diversity present in microbial communities enables cooperation toward accomplishing more complex tasks than possible by a single organism. Members of a consortium communicate by exchanging metabolites or signals that allow them to coordinate their activity through division of labor. In contrast with monocultures, evidence suggests that microbial consortia self-organize to form spatial patterns, such as those observed in biofilms or in soil aggregates, that enable them to respond to gradients, to improve resource interception, and to exchange metabolites more effectively. Current biotechnological applications of microorganisms remain rudimentary, often relying on genetically engineered monocultures (e.g., pharmaceuticals) or mixed cultures of partially known composition (e.g., wastewater treatment), yet the vast potential of “microbial ecological power” observed in most natural environments remains largely underused. In line with the Unified Microbiome Initiative (UMI), which aims to “discover and advance tools to understand and harness the capabilities of Earth's microbial ecosystems,” we propose in this concept paper to capitalize on ecological insights into the spatial and modular design of interlinked microbial consortia that would overcome limitations of natural systems and attempt to optimize the functionality of the members and the performance of the engineered consortium. The topology of the spatial connections linking the various members and the regulated fluxes of media between those modules, while representing a major engineering challenge, would allow the microbial species to interact. The modularity of such spatially linked microbial consortia (SLMC) could facilitate the design of scalable bioprocesses that can be incorporated as parts of a larger biochemical network. By reducing the need for a compatible growth environment for all species simultaneously, SLMC will dramatically expand the range of possible combinations of microorganisms and their potential applications. We briefly review existing tools to engineer such assemblies and optimize potential benefits resulting from the collective activity of their members. Prospective microbial consortia and proposed spatial configurations will be illustrated, and preliminary calculations highlighting the advantages of SLMC over co-cultures will be presented, followed by a discussion of challenges and opportunities for moving forward with some designs. PMID:28670307
An Integrated Cyberenvironment for Event-Driven Environmental Observatory Research and Education
NASA Astrophysics Data System (ADS)
Myers, J.; Minsker, B.; Butler, R.
2006-12-01
National environmental observatories will soon provide large-scale data from diverse sensor networks and community models. While much attention is focused on piping data from sensors to archives and users, truly integrating these resources into the everyday research activities of scientists and engineers across the community, and enabling their results and innovations to be brought back into the observatory - also critical to the long-term success of the observatories - is often neglected. This talk will give an overview of the Environmental Cyberinfrastructure Demonstrator (ECID) Cyberenvironment for observatory-centric environmental research and education, under development at the National Center for Supercomputing Applications (NCSA), which is designed to address these issues. Cyberenvironments incorporate collaboratory and grid technologies, web services, and other cyberinfrastructure into an overall framework that balances needs for efficient coordination and the ability to innovate. They are designed to support the full scientific lifecycle, both in terms of individual experiments moving from data to workflows to publication and at the macro level, where new discoveries lead to additional data, models, tools, and conceptual frameworks that augment and evolve community-scale systems such as observatories. The ECID cyberenvironment currently integrates five major components - a collaborative portal, workflow engine, event manager, metadata repository, and social network personalization capabilities - that have novel features inspired by the Cyberenvironment concept and enable powerful environmental research scenarios. A summary of these components and the overall cyberenvironment will be given in this talk, while other posters will give details on several of the components. The summary will be presented within the context of environmental use case scenarios created in collaboration with researchers from the WATERS (WATer and Environmental Research Systems) Network, a joint National Science Foundation-funded initiative of the hydrology and environmental engineering communities. The use case scenarios include identifying sensor anomalies in point and streaming sensor data and notifying data managers in near-real time; and referring users of data or data products (e.g., workflows, publications) to related data or data products.
MODULAR FIELD-BIOREACTOR FOR ACID MINE DRAINAGE TREATMENT
The presentation focuses on the improvements to engineered features of a passive technology that has been used for remediation of acid rock drainage (ARD). This passive remedial technology, a sulfate-reducing bacteria (SRB) bioreactor, takes advantage of the ability of SRB that,...
NASA Technical Reports Server (NTRS)
Proctor, B. W.; Reysa, R. P.; Russell, D. J.
1975-01-01
Housekeeping, off-duty, and medical data concerning the appliances considered for the space station are presented. Appliance functions analyzed include cleanup; collection, processing, and storage of refuse; crew entertainment and physical exercise; and medical appliances such as autoclaves and ergometers.
Ren, Hengqian; Hu, Pingfan; Zhao, Huimin
2017-08-01
Pathway refactoring serves as an invaluable synthetic biology tool for natural product discovery, characterization, and engineering. However, the complicated and laborious molecular biology techniques involved largely hinder its application in natural product research, especially in a high-throughput manner. Here we report a plug-and-play pathway refactoring workflow for high-throughput, flexible pathway construction and expression in both Escherichia coli and Saccharomyces cerevisiae. Biosynthetic genes were first cloned into pre-assembled helper plasmids with promoters and terminators, resulting in a series of expression cassettes. These expression cassettes were further assembled using Golden Gate reactions to generate fully refactored pathways. The inclusion of spacer plasmids in this system not only increases the flexibility for refactoring pathways with different numbers of genes, but also facilitates gene deletion and replacement. As proof of concept, a total of 96 pathways for combinatorial carotenoid biosynthesis were built successfully. This workflow should be generally applicable to different classes of natural products produced by various organisms. Biotechnol. Bioeng. 2017;114:1847-1854. © 2017 Wiley Periodicals, Inc.
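The scale of the combinatorial design space described above (96 refactored pathways) can be reproduced with a short enumeration; the gene and promoter names below are hypothetical placeholders, not the parts used in the study.

```python
# Illustrative enumeration of the combinatorial design space: assigning one
# promoter variant to each biosynthetic gene yields the full set of refactored
# pathways. Gene and promoter identifiers here are hypothetical.
from itertools import product

genes = ["crtE", "crtB", "crtI"]
promoters = {"crtE": ["P1", "P2", "P3", "P4"],
             "crtB": ["P1", "P2", "P3", "P4"],
             "crtI": ["P1", "P2", "P3", "P4", "P5", "P6"]}

designs = [dict(zip(genes, combo))
           for combo in product(*(promoters[g] for g in genes))]
print(len(designs))   # 4 * 4 * 6 = 96 candidate pathways
```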
Zhou, Mowei; Paša-Tolić, Ljiljana; Stenoien, David L
2017-02-03
As histones play central roles in most chromosomal functions including regulation of DNA replication, DNA damage repair, and gene transcription, both their basic biology and their roles in disease development have been the subject of intense study. Because multiple post-translational modifications (PTMs) along the entire protein sequence are potential regulators of histones, a top-down approach, where intact proteins are analyzed, is ultimately required for complete characterization of proteoforms. However, significant challenges remain for top-down histone analysis primarily because of deficiencies in separation/resolving power and effective identification algorithms. Here we used state-of-the-art mass spectrometry and a bioinformatics workflow for targeted data analysis and visualization. The workflow uses ProMex for intact mass deconvolution, MSPathFinder as a search engine, and LcMsSpectator as a data visualization tool. When complemented with the open-modification tool TopPIC, this workflow enabled identification of novel histone PTMs including tyrosine bromination on histone H4 and H2A, H3 glutathionylation, and mapping of conventional PTMs along the entire protein for many histone subunits.
Advanced Combustion Numerics and Modeling - FY18 First Quarter Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitesides, R. A.; Killingsworth, N. J.; McNenly, M. J.
This project is focused on early stage research and development of numerical methods and models to improve advanced engine combustion concepts and systems. The current focus is on development of new mathematics and algorithms to reduce the time to solution for advanced combustion engine design using detailed fuel chemistry. The research is prioritized towards the most time-consuming workflow bottlenecks (computer and human) and accuracy gaps that slow ACS program members. Zero-RK, the fast and accurate chemical kinetics solver software developed in this project, is central to the research efforts and continues to be developed to address the current and emerging needs of the engine designers, engine modelers and fuel mechanism developers.
Rasmussen, Luke V; Peissig, Peggy L; McCarty, Catherine A; Starren, Justin
2012-06-01
Although the penetration of electronic health records is increasing rapidly, much of the historical medical record is only available in handwritten notes and forms, which require labor-intensive, human chart abstraction for some clinical research. The few previous studies on automated extraction of data from these handwritten notes have focused on monolithic, custom-developed recognition systems or third-party systems that require proprietary forms. We present an optical character recognition processing pipeline, which leverages the capabilities of existing third-party optical character recognition engines, and provides the flexibility offered by a modular custom-developed system. The system was configured and run on a selected set of form fields extracted from a corpus of handwritten ophthalmology forms. The processing pipeline allowed multiple configurations to be run, with the optimal configuration consisting of the Nuance and LEADTOOLS engines running in parallel with a positive predictive value of 94.6% and a sensitivity of 13.5%. While limitations exist, preliminary experience from this project yielded insights on the generalizability and applicability of integrating multiple, inexpensive general-purpose third-party optical character recognition engines in a modular pipeline.
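A minimal sketch of the multi-engine idea, assuming hypothetical engine callables: requiring all engines to agree on a field yields exactly the trade-off reported above, high positive predictive value at low sensitivity. The agreement rule here is an assumption, not the published pipeline's combination logic.

```python
# Sketch of the modular pipeline idea: run several third-party OCR engines on
# the same form field and accept a value only when they agree. `engines` is a
# list of callables standing in for commercial OCR engines.
from concurrent.futures import ThreadPoolExecutor

def recognize_field(field_image, engines):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda engine: engine(field_image), engines))
    first = results[0]
    if first.strip() and all(r == first for r in results):
        return first   # confident: every engine read the same text (high PPV)
    return None        # abstain; the field goes to human review (low sensitivity)
```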
NASA Astrophysics Data System (ADS)
Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.
2005-12-01
Long-term performance monitoring has been identified by DOE, DOD, and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective, and transparent. Over the last several years INL staff has designed and implemented a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical, and hydrological) with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park, and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance, and residence times of water. Our system uses modular components for data selection and graphing and WSDL-compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending functionality (e.g., adding novel models) is relatively straightforward. As system access requires a standard web browser and uses intuitive functionality, stakeholders with diverse degrees of technical insight can use this system with little or no training.
NASA Astrophysics Data System (ADS)
Hallett, B. W.; Dere, A. L. D.; Lehnert, K.; Carter, M.
2016-12-01
Vast numbers of physical samples are routinely collected by geoscientists to probe key scientific questions related to global climate change, biogeochemical cycles, magmatic processes, mantle dynamics, etc. Despite their value as irreplaceable records of nature, the majority of these samples remain undiscoverable by the broader scientific community because they lack a digital presence or are not well-documented enough to facilitate their discovery and reuse for future scientific and educational use. The NSF EarthCube iSamples Research Coordination Network seeks to develop a unified approach across all Earth Science disciplines for the registration, description, identification, and citation of physical specimens in order to take advantage of the new opportunities that cyberinfrastructure offers. Even as consensus around best practices begins to emerge, such as the use of the International Geo Sample Number (IGSN), more work is needed to communicate these practices to investigators to encourage widespread adoption. Recognizing the importance of students and early career scientists in particular to transforming data and sample management practices, the iSamples Education and Training Working Group is developing training modules for sample collection, documentation, and management workflows. These training materials are made available to educators/research supervisors online at http://earthcube.org/group/isamples and can be modularized for supervisors to create a customized research workflow. This study details the design and development of several sample management tutorials, created by early career scientists and documented in collaboration with undergraduate research students in field and lab settings. Modules under development focus on rock outcrops, rock cores, soil cores, and coral samples, with an emphasis on sample management throughout the collection, analysis, and archiving process. We invite others to share their sample management/registration workflows and to develop training modules. This educational approach, with evolving digital materials, can help prepare future scientists to perform research in a way that will contribute to EarthCube data integration and discovery.
Mohammed, Yassene; Domański, Dominik; Jackson, Angela M; Smith, Derek S; Deelder, André M; Palmblad, Magnus; Borchers, Christoph H
2014-06-25
One challenge in Multiple Reaction Monitoring (MRM)-based proteomics is to select the most appropriate surrogate peptides to represent a target protein. We present here a software package to automatically generate these most appropriate surrogate peptides for an LC/MRM-MS analysis. Our method integrates information about the proteins, their tryptic peptides, and the suitability of these peptides for MRM, which is available online in UniProtKB, NCBI's dbSNP, ExPASy, PeptideAtlas, PRIDE, and GPMDB. The scoring algorithm reflects our knowledge in choosing the best candidate peptides for MRM, based on the uniqueness of the peptide in the targeted proteome, its physicochemical properties, and whether it has previously been observed. The modularity of the workflow allows further extension and additional selection criteria to be incorporated. We have developed a simple Web interface where the researcher provides the protein accession number, the subject organism, and peptide-specific options. Currently, the software is designed for human and mouse proteomes, but additional species can easily be added. Our software improved the peptide selection by eliminating human error, considering multiple data sources and all of the isoforms of the protein, and resulted in faster peptide selection - approximately 50 proteins per hour compared to 8 per day. Compiling a list of optimal surrogate peptides for target proteins to be analyzed by LC/MRM-MS has been a cumbersome process, in which expert researchers retrieved information from different online repositories and used their own reasoning to find the most appropriate peptides. Our scientific workflow automates this process by integrating information from different data sources including UniProt, Global Proteome Machine, NCBI's dbSNP, and PeptideAtlas, simulating the researchers' reasoning, and incorporating their knowledge of how to select the best proteotypic peptides for an MRM analysis. The developed software can help to standardize the selection of peptides, eliminate human error, and increase productivity. Copyright © 2014 Elsevier B.V. All rights reserved.
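The peptide-ranking logic can be illustrated with a toy scorer; the weights and rules below are hypothetical stand-ins for the published selection criteria, which draw on the databases named above.

```python
# Toy scorer for MRM surrogate-peptide selection: rank tryptic peptides by
# uniqueness, prior observation, and simple physicochemical filters. All
# weights and rules here are illustrative assumptions.
def mrm_score(peptide, unique_in_proteome, times_observed):
    score = 0.0
    if unique_in_proteome:
        score += 3.0                        # must map only to the target protein
    score += min(times_observed, 10) * 0.2  # previously observed peptides rank higher
    if 7 <= len(peptide) <= 20:
        score += 1.0                        # practical length for LC/MRM-MS
    if "M" not in peptide and "C" not in peptide:
        score += 1.0                        # avoid easily modified residues
    if not any(aa in "KR" for aa in peptide[:-1]):
        score += 0.5                        # no internal missed-cleavage sites
    return score

# Rank candidate (peptide, unique, observed) tuples, highest score first.
def select_surrogates(candidates, top_n=3):
    return sorted(candidates, key=lambda c: mrm_score(*c), reverse=True)[:top_n]
```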
Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.
Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell
2011-07-26
Biological databases and computational biology tools are provided by research groups around the world and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has been commonly addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform to tie together bioinformatics resources: Web Services. In the last decade, Web Services have spread widely in bioinformatics and earned the title of recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence allows in-silico experiments to be executed at genome scale using standard SOAP Web Services and workflows. As a proof of principle, we annotated an RNA-seq dataset using a plain BPEL workflow engine.
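A minimal sketch of the stream-like pattern under stated assumptions: the orchestrator sends a large input as fixed-size partitions, one modest SOAP call per chunk, instead of a single oversized payload. `call_service` stands in for a SOAP client stub; the names are illustrative, not the authors' interface.

```python
# Data-partitioning pattern: chunk a large input so each service call stays
# small and memory use stays bounded on both ends of the connection.
def partition(sequence, chunk_size=1_000_000):
    for i in range(0, len(sequence), chunk_size):
        yield sequence[i:i + chunk_size]

def annotate_genome(sequence, call_service):
    partial_results = []
    for chunk in partition(sequence):
        partial_results.append(call_service(chunk))  # one modest call per chunk
    return "".join(partial_results)                  # merge partial annotations
```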
A Workflow for UAV's Integration into a Geodesign Platform
NASA Astrophysics Data System (ADS)
Anca, P.; Calugaru, A.; Alixandroae, I.; Nazarie, R.
2016-06-01
This paper presents a workflow for the development of various Geodesign scenarios. The subject is important in the context of identifying patterns and designing solutions for a Smart City with optimized public transportation, efficient buildings, efficient utilities, recreational facilities, and so on. The workflow describes the procedures starting with acquiring data in the field, followed by data processing, orthophoto generation, DTM generation, integration into a GIS platform, and analysis to better support Geodesign. Esri's City Engine is used mostly for its 3D modeling capabilities, which enable the user to obtain realistic 3D models. The workflow uses as inputs information extracted from images acquired using UAV technologies, namely eBee, existing 2D GIS geodatabases, and a set of CGA rules. The method used further, called procedural modeling, applies rules in order to extrude buildings, the street network, parcel zoning, and side details, based on the initial attributes from the geodatabase. The resulting products are various scenarios for redesigning and for analyzing new exploitation sites. Finally, these scenarios can be published as interactive web scenes for internal, group, or public consultation. In this way, problems like the impact of new constructions being built, re-arranging green spaces, or changing routes for public transportation are revealed through impact, visibility, or shadowing analysis and are brought to the citizens' attention. This leads to better decisions.
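A toy analogue of the procedural-modeling step may help: rules map 2D parcel attributes from the geodatabase to extruded 3D volumes. The attribute names and the 3 m floor height are hypothetical; City Engine's actual CGA rules are far richer.

```python
# CGA-style rule in miniature: derive a 3D building volume from 2D parcel
# attributes. Attribute names and constants are illustrative assumptions.
def extrude_building(parcel):
    floors = parcel.get("floors", 1)
    return {
        "base": parcel["footprint"],             # list of (x, y) vertices
        "height": floors * 3.0,                  # assume 3 m per floor
        "roof": "flat" if floors > 4 else "gabled",
    }

# scene = [extrude_building(p) for p in geodatabase_parcels]
```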
Modular flow chamber for engineering bone marrow architecture and function.
Di Buduo, Christian A; Soprano, Paolo M; Tozzi, Lorenzo; Marconi, Stefania; Auricchio, Ferdinando; Kaplan, David L; Balduini, Alessandra
2017-11-01
The bone marrow is a soft, spongy, gelatinous tissue found in the hollow cavities of flat and long bones that support hematopoiesis in order to maintain the physiologic turnover of all blood cells. Silk fibroin, derived from Bombyx mori silkworm cocoons, is a promising biomaterial for bone marrow engineering, because of its tunable architecture and mechanical properties, the capacity of incorporating labile compounds without loss of bioactivity and demonstrated ability to support blood cell formation. In this study, we developed a bone marrow scaffold consisting of a modular flow chamber made of polydimethylsiloxane, holding a silk sponge, prepared with salt leaching methods and functionalized with extracellular matrix components. The silk sponge was able to support efficient platelet formation when megakaryocytes were seeded in the system. Perfusion of the chamber allowed the recovery of functional platelets based on multiple activation tests. Further, inhibition of AKT signaling molecule, which has been shown to be crucial in regulating physiologic platelet formation, significantly reduced the number of collected platelets, suggesting the applicability of this tissue model for evaluation of the effects of bone marrow exposure to compounds that may affect platelet formation. In conclusion, we have bioengineered a novel modular system that, along with multi-porous silk sponges, can provide a useful technology for reproducing a simplified bone marrow scaffold for blood cell production ex vivo. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liu, Han; Fang, Guochen; Wu, Hui; Li, Zhimin; Ye, Qin
2018-05-01
L-cysteine is an amino acid with important physiological functions and has a wide range of applications in the medicine, food, animal feed, and cosmetics industries. In this study, the L-cysteine synthesis in Escherichia coli is divided into four modules: the transport module, sulfur module, precursor module, and degradation module. The engineered strain LH03 (overexpression of the feedback-insensitive cysE and the exporter ydeD in JM109) accumulated 45.8 mg L-1 of L-cysteine in 48 hr with a yield of 0.4% g/g glucose. Further modifications of strains and culture conditions, based on rational metabolic engineering and the modular strategy, improved L-cysteine biosynthesis significantly. The engineered strain LH06 (with additional overexpression of serA, serC, and serB and a double mutant of tnaA and sdaA in LH03) produced 620.9 mg L-1 of L-cysteine with a yield of 6.0% g/g glucose, which increased the production by 12 times and the yield by 14 times relative to LH03 under the original conditions. In fed-batch fermentation performed in a 5-L reactor, the concentration of L-cysteine reached 5.1 g L-1 in 32 hr. This work demonstrates that the combination of rational metabolic engineering and the modular strategy is a promising approach for increasing L-cysteine production in E. coli. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
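The key design point, decoupling the pipeline definition from its execution context, can be mimicked in a few lines of Python. This is a conceptual sketch only, not OpenMOLE's Scala DSL; the function names are illustrative.

```python
# Decoupling sketch: the pipeline (task + inputs) never changes; only the
# execution environment passed in does, so scaling out changes one line.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def run_pipeline(task, inputs, environment):
    with environment() as pool:             # swap environments freely
        return list(pool.map(task, inputs))

# Local prototype (threads on a desktop machine):
# results = run_pipeline(segment_image, scans, ThreadPoolExecutor)
# Scale out (stand-in for a cluster or grid submitter):
# results = run_pipeline(segment_image, scans, ProcessPoolExecutor)
```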
Kim, Hyun Uk; Kim, Tae Yong; Lee, Sang Yup
2011-01-01
Genome-scale metabolic network models have contributed to elucidating biological phenomena and to predicting gene targets to engineer for biotechnological applications. With their increasing importance, their precise network characterization has also been crucial for a better understanding of cellular physiology. We herein introduce a framework for network modularization and Bayesian network analysis (FMB) to investigate an organism's metabolism under perturbation. FMB reveals the direction of influences among metabolic modules, in which reactions with similar or positively correlated flux variation patterns are clustered in response to a specific perturbation, using metabolic flux data. With metabolic flux data calculated by constraints-based flux analysis under both control and perturbation conditions, FMB, in essence, reveals the effects of specific perturbations on the biological system through network modularization and Bayesian network analysis at the metabolic modular level. As a demonstration, this framework was applied to a genetically perturbed Escherichia coli metabolism, an lpdA gene knockout mutant, using its genome-scale metabolic network model. Ultimately, it provides alternative scenarios of metabolic flux distributions in response to the perturbation, which are complementary to the data obtained from conventionally available genome-wide high-throughput techniques or metabolic flux analysis.
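A simplified sketch of the modularization step, assuming a conditions-by-reactions flux matrix: reactions whose flux variations are positively correlated are grouped into modules, which then become the nodes of the Bayesian network. The greedy grouping and the correlation threshold are illustrative, not the published algorithm.

```python
# Group reactions with positively correlated flux variation into modules.
import numpy as np

def flux_modules(flux_matrix, reaction_ids, r_min=0.9):
    corr = np.corrcoef(flux_matrix.T)       # reaction-by-reaction correlation
    modules, assigned = [], set()
    for i, rid in enumerate(reaction_ids):
        if rid in assigned:
            continue
        members = [reaction_ids[j] for j in range(len(reaction_ids))
                   if corr[i, j] >= r_min and reaction_ids[j] not in assigned]
        assigned.update(members)
        modules.append(members)
    return modules                          # module lists become network nodes
```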
Modular closed-loop control of diabetes.
Patek, S D; Magni, L; Dassau, E; Karvetski, C; Toffanin, C; De Nicolao, G; Del Favero, S; Breton, M; Man, C Dalla; Renard, E; Zisser, H; Doyle, F J; Cobelli, C; Kovatchev, B P
2012-11-01
Modularity plays a key role in many engineering systems, allowing for plug-and-play integration of components, enhancing flexibility and adaptability, and facilitating standardization. In the control of diabetes, i.e., the so-called "artificial pancreas," modularity allows for the step-wise introduction of (and regulatory approval for) algorithmic components, starting with subsystems for assured patient safety and followed by higher-layer components that serve to modify the patient's basal rate in real time. In this paper, we introduce a three-layer modular architecture for the control of diabetes, consisting of a sensor/pump interface module (IM), a continuous safety module (CSM), and a real-time control module (RTCM), which separates the functions of insulin recommendation (postmeal insulin for mitigating hyperglycemia) and safety (prevention of hypoglycemia). In addition, we provide details of instances of all three layers of the architecture: the APS© serving as the IM, the safety supervision module (SSM) serving as the CSM, and the range correction module (RCM) serving as the RTCM. We evaluate the performance of the integrated system via in silico preclinical trials, demonstrating (1) the ability of the SSM to reduce the incidence of hypoglycemia under nonideal operating conditions and (2) the ability of the RCM to reduce glycemic variability.
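The layered separation of recommendation and safety can be sketched as follows; the thresholds, gain, and prediction rule are hypothetical placeholders, not the validated SSM/RCM algorithms.

```python
# Layered artificial-pancreas sketch: the control layer proposes a dose, and
# the safety layer may only attenuate it when hypoglycemia is predicted.
def rtcm_dose(glucose, basal_rate, target=120.0, gain=0.01):
    correction = max(0.0, gain * (glucose - target))  # mitigate hyperglycemia
    return basal_rate + correction

def csm_filter(proposed_dose, glucose, trend):
    predicted = glucose + 30 * trend       # crude 30-min linear projection
    if predicted < 70:                     # hypoglycemia predicted: cut insulin
        return 0.0
    if predicted < 90:
        return 0.5 * proposed_dose         # attenuate rather than stop
    return proposed_dose

def control_step(sensor_glucose, trend, basal_rate):
    dose = rtcm_dose(sensor_glucose, basal_rate)      # RTCM layer proposes
    return csm_filter(dose, sensor_glucose, trend)    # CSM layer vets it
```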
The development of a lightweight modular compliant surface bio-inspired robot
NASA Astrophysics Data System (ADS)
Stone, David L.; Cranney, John
2004-09-01
The DARPA-sponsored Compliant Surface Robotics (CSR) program pursues development of a high-mobility, lightweight, modular, morphable robot for military forces in the field and for other industrial uses. The USTLAB effort builds on proof-of-concept feasibility studies and demonstration of a 4-, 6-, or 8-wheeled modular vehicle with articulated leg-wheel assemblies. In Phase I, basic open-plant stability was proven for climbing over obstacles ~18 inches high and traversing ~75-degree inclines (up, down, or sideways) in a platform of approximately 15 kilograms. At the completion of Phase II, we have completed the mechanical and electronics engineering design and achieved changes which currently enable future work in active articulation, enabling autonomous reconfiguration for a wide variety of terrains, including upside-down operation (in case of flip-over), and we have reduced platform weight by one third. Currently the vehicle weighs 10 kilograms and will grow marginally as additional actuation, MEMS-based organic sensing, payload, and autonomous processing are added. The CSR vehicle's modular spider-like configuration facilitates adaptation to many uses and compliance over rugged terrain. The developmental process and the vehicle characteristics will be discussed.
The application of SMA spring actuators to a lightweight modular compliant surface bioinspired robot
NASA Astrophysics Data System (ADS)
Stone, David L.; Cranney, John; Liang, Robert; Taya, Minoru
2004-07-01
The DARPA-sponsored Compliant Surface Robotics (CSR) program pursues development of a high-mobility, lightweight, modular, morphable robot for military forces in the field and for other industrial uses. The USTLAB and University of Washington Center for Intelligent Materials and Systems (CIMS) effort builds on USTLAB proof-of-concept feasibility studies and demonstration of a 4-, 6-, or 8-wheeled modular vehicle with articulated leg-wheel assemblies. A collaborative effort between USTLAB and UW-CIMS explored the application of shape memory alloy (SMA) nickel-titanium springs to a leg extension actuator capable of actuating with 4.5 N of force over a 50 mm stroke. At the completion of Phase II, we have completed the mechanical and electronics engineering design and achieved conventional actuation that currently enables active articulation, allowing autonomous reconfiguration for a wide variety of terrains, including upside-down operation (in case of flip-over); we have developed a leg extension actuator demonstration model; and we have positioned our team to pursue a small vehicle with leg extension actuators in follow-on work. The CSR vehicle's modular spider-like configuration facilitates adaptation to many uses and compliance over rugged terrain. The developmental process, actuator, and vehicle characteristics will be discussed.
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.
2011-12-01
Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDR's) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDR's with full traceability. The capabilities and services provided include:
- Discovery of the collections by keyword search, exposed using the OpenSearch protocol;
- Space/time query across the CDR's granules and all of the input datasets via OpenSearch;
- User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets;
- Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files);
- Self-documenting CDR's published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance;
- Recording of processing provenance and data lineage into a queryable provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine;
- Open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine;
- Advertising of the metadata (e.g., physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format;
- Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops;
- Rich "web browse" of the CDR's with full metadata and the provenance trail one click away;
- Advertising of all services as Google-discoverable "service casts" using the Atom format.
The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUIs.
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.
2016-01-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contributes to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), that enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the following iterations. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website. PMID:26419769
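The attenuation step at the heart of the iterative workflow can be sketched in a few lines; the exact scaling reSpect applies may differ, so the linear confidence weighting below is an assumption.

```python
# reSpect-style residual spectrum: fragment peaks explained by an identified
# peptide are scaled down in proportion to identification confidence, and the
# residual spectrum is then searched again for chimeric co-isolated peptides.
def attenuate(spectrum, explained_mzs, confidence, tol=0.02):
    """spectrum: list of (mz, intensity) pairs; returns the residual spectrum."""
    residual = []
    for mz, intensity in spectrum:
        if any(abs(mz - e) < tol for e in explained_mzs):
            intensity *= (1.0 - confidence)  # high-confidence IDs remove more
        residual.append((mz, intensity))
    return residual
```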
It's All About the Data: Workflow Systems and Weather
NASA Astrophysics Data System (ADS)
Plale, B.
2009-05-01
Digital data is fueling new advances in the computational sciences, particularly geospatial research, as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data-intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real-time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data-driven science, for discovery, for determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real-time information. If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.
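Automated provenance capture of the kind argued for above can be sketched with a decorator that records each task's lineage as it runs; the record format and names below are illustrative, not the LEAD implementation.

```python
# Provenance-capture sketch: wrap every workflow task so its inputs, outputs,
# code version, and timestamps are recorded automatically at execution time.
import functools, time

PROVENANCE = []   # in practice: a queryable store, e.g. an OPM-style graph

def traced(version):
    def wrap(task):
        @functools.wraps(task)
        def run(*args, **kwargs):
            record = {"task": task.__name__, "version": version,
                      "inputs": (args, kwargs), "start": time.time()}
            result = task(*args, **kwargs)
            record.update(output=result, end=time.time())
            PROVENANCE.append(record)
            return result
        return run
    return wrap

@traced(version="forecast-model-2.1")           # hypothetical model version
def run_forecast(config, observations):
    return f"forecast using {len(observations)} obs"  # stand-in for the model
```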
Polyphony: A Workflow Orchestration Framework for Cloud Computing
NASA Technical Reports Server (NTRS)
Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom
2010-01-01
Cloud computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Laboratory (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
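The failure-resilience property can be sketched with a retry-on-failure task queue: work is leased from a shared queue and re-queued if a worker dies mid-task, so node failures cost retries rather than lost results. The leasing semantics are illustrative, not Polyphony's actual internals.

```python
# Resilient worker sketch: failed jobs are re-queued up to a retry limit,
# so a crashed or flaky node never loses work permanently.
import queue

def worker(tasks: queue.Queue, results: list, max_retries=3):
    while True:
        try:
            attempt, job = tasks.get_nowait()   # lease one (attempt, job) pair
        except queue.Empty:
            return
        try:
            results.append(job())               # e.g. process one image tile
        except Exception:
            if attempt < max_retries:
                tasks.put((attempt + 1, job))   # simulate lease expiry: re-queue
        finally:
            tasks.task_done()
```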
DistMap: a toolkit for distributed short read mapping on a Hadoop cluster.
Pandey, Ram Vinay; Schlötterer, Christian
2013-01-01
With the rapid and steady increase of next generation sequencing data output, the mapping of short reads has become a major data analysis bottleneck. On a single computer, it can take several days to map the vast quantity of reads produced from a single Illumina HiSeq lane. In an attempt to ameliorate this bottleneck we present a new tool, DistMap - a modular, scalable and integrated workflow to map reads in the Hadoop distributed computing framework. DistMap is easy to use, currently supports nine different short read mapping tools and can be run on all Unix-based operating systems. It accepts reads in FASTQ format as input and provides mapped reads in a SAM/BAM format. DistMap supports both paired-end and single-end reads thereby allowing the mapping of read data produced by different sequencing platforms. DistMap is available from http://code.google.com/p/distmap/
[Weighted gene co-expression network analysis in biomedicine research].
Liu, Wei; Li, Li; Ye, Hua; Tu, Wei
2017-11-25
High-throughput biological technologies are now widely applied in biology and medicine, allowing scientists to monitor thousands of parameters simultaneously in a specific sample. However, it is still an enormous challenge to mine useful information from high-throughput data. The emergence of network biology provides deeper insights into complex bio-systems and reveals the modularity in tissue/cellular networks. Correlation networks are increasingly used in bioinformatics applications. The weighted gene co-expression network analysis (WGCNA) tool can detect clusters of highly correlated genes. Therefore, we systematically reviewed the application of WGCNA in the study of disease diagnosis, pathogenesis, and other related fields. First, we introduce the principle, workflow, advantages, and disadvantages of WGCNA. Second, we present the application of WGCNA in disease, physiology, drug, evolution, and genome-annotation studies. Then, we describe the application of WGCNA to newly developed high-throughput methods. We hope this review will help to promote the application of WGCNA in biomedicine research.
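The core computation behind WGCNA is compact: a gene-gene correlation matrix raised elementwise to a soft-thresholding power. A minimal numpy sketch of the unsigned variant follows; it is illustrative only and not the R package's implementation.

```python
import numpy as np

def wgcna_adjacency(expr: np.ndarray, beta: int = 6) -> np.ndarray:
    """expr: genes x samples expression matrix.
    Returns the weighted adjacency a_ij = |cor(x_i, x_j)| ** beta."""
    corr = np.corrcoef(expr)    # Pearson correlation between gene profiles
    adj = np.abs(corr) ** beta  # soft thresholding emphasizes strong links
    np.fill_diagonal(adj, 0.0)
    return adj

# The full workflow then computes topological overlap and clusters the
# genes hierarchically to detect co-expression modules.
```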
A Closer Look at 804: A Summary of Considerations for DoD Program Managers
2011-12-01
aimed at changing the culture from one that is focused typically on a single delivery to a new model that comprises multiple deliveries to es... under the Agency CIOs, and develop flexible budget models that align with modular development. • Launch an interactive platform for pre-RFP agency...
LEGO® bricks as building blocks for centimeter-scale biological environments: the case of plants.
Lind, Kara R; Sizmur, Tom; Benomar, Saida; Miller, Anthony; Cademartiri, Ludovico
2014-01-01
LEGO bricks are commercially available interlocking pieces of plastic that are conventionally used as toys. We describe their use to build engineered environments for cm-scale biological systems, in particular plant roots. Specifically, we take advantage of the unique modularity of these building blocks to create inexpensive, transparent, reconfigurable, and highly scalable environments for plant growth in which structural obstacles and chemical gradients can be precisely engineered to mimic soil.
Concise Review: Organ Engineering: Design, Technology, and Integration.
Kaushik, Gaurav; Leijten, Jeroen; Khademhosseini, Ali
2017-01-01
Engineering complex tissues and whole organs has the potential to dramatically impact translational medicine in several avenues. Organ engineering is a discipline that integrates biological knowledge of embryological development, anatomy, physiology, and cellular interactions with enabling technologies including biocompatible biomaterials and biofabrication platforms such as three-dimensional bioprinting. When engineering complex tissues and organs, core design principles must be taken into account, such as the structure-function relationship, biochemical signaling, mechanics, gradients, and spatial constraints. Technological advances in biomaterials, biofabrication, and biomedical imaging allow for in vitro control of these factors to recreate in vivo phenomena. Finally, organ engineering emerges as an integration of biological design and technical rigor. An overall workflow for organ engineering and guiding technology to advance biology as well as a perspective on necessary future iterations in the field is discussed. Stem Cells 2017;35:51-60. © 2016 AlphaMed Press.
Managing bioengineering complexity with AI techniques.
Beal, Jacob; Adler, Aaron; Yaman, Fusun
2016-10-01
Our capabilities for systematic design and engineering of biological systems are rapidly increasing. Effectively engineering such systems, however, requires the synthesis of a rapidly expanding and changing complex body of knowledge, protocols, and methodologies. Many of the problems in managing this complexity, however, appear susceptible to being addressed by artificial intelligence (AI) techniques, i.e., methods enabling computers to represent, acquire, and employ knowledge. Such methods can be employed to automate physical and informational "routine" work and thus better allow humans to focus their attention on the deeper scientific and engineering issues. This paper examines the potential impact of AI on the engineering of biological organisms through the lens of a typical organism engineering workflow. We identify a number of key opportunities for significant impact, as well as challenges that must be overcome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Computational Tools for Metabolic Engineering
Copeland, Wilbert B.; Bartley, Bryan A.; Chandran, Deepak; Galdzicki, Michal; Kim, Kyung H.; Sleight, Sean C.; Maranas, Costas D.; Sauro, Herbert M.
2012-01-01
A great variety of software applications are now employed in the metabolic engineering field. These applications have been created to support a wide range of experimental and analysis techniques. Computational tools are utilized throughout the metabolic engineering workflow to extract and interpret relevant information from large data sets, to present complex models in a more manageable form, and to propose efficient network design strategies. In this review, we present a number of tools that can assist in modifying and understanding cellular metabolic networks. The review covers seven areas of relevance to metabolic engineers. These include metabolic reconstruction efforts, network visualization, nucleic acid and protein engineering, metabolic flux analysis, pathway prospecting, post-structural network analysis and culture optimization. The list of available tools is extensive and we can only highlight a small, representative portion of the tools from each area. PMID:22629572
Simulating Operation of a Large Turbofan Engine
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Frederick, Dean K.; DeCastro, Jonathan
2008-01-01
The Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) is a computer program for simulating transient operation of a commercial turbofan engine that can generate as much as 90,000 lb (≈0.4 MN) of thrust. It includes a power-management system that enables simulation of open- or closed-loop engine operation over a wide range of thrust levels throughout the full range of flight conditions. C-MAPSS provides the user with a set of tools for performing open- and closed-loop transient simulations and comparison of linear and non-linear models throughout its operating envelope, in an easy-to-use graphical environment.
FAST Modularization Framework for Wind Turbine Simulation: Full-System Linearization
Jonkman, Jason M.; Jonkman, Bonnie J.
2016-10-03
The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations, e.g., for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous to understand the system response and exploit well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.
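Linearization here means replacing the nonlinear dynamics x_dot = f(x, u) with x_dot = A*dx + B*du, where A and B are Jacobians evaluated at an operating point. The finite-difference sketch below conveys the idea; FAST's own implementation perturbs the assembled multiphysics states and is considerably more involved.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Numerically linearize x_dot = f(x, u) about (x0, u0).
    Returns A = df/dx and B = df/du via central differences."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B
```

The eigenvalues of A at a given operating point then expose the system modes that the linear analysis methods mentioned in the abstract rely on.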
The development of a post-test diagnostic system for rocket engines
NASA Technical Reports Server (NTRS)
Zakrajsek, June F.
1991-01-01
An effort was undertaken by NASA to develop an automated post-test, post-flight diagnostic system for rocket engines. The automated system is designed to be generic and to automate the rocket engine data review process. A modular, distributed architecture with a generic software core was chosen to meet the design requirements. The diagnostic system is initially being applied to the Space Shuttle Main Engine data review process. The system modules currently under development are the session/message manager, and portions of the applications section, the component analysis section, and the intelligent knowledge server. An overview is presented of a rocket engine data review process, the design requirements and guidelines, the architecture and modules, and the projected benefits of the automated diagnostic system.
CONSTRUCTION OF MODULAR FIELD-BIOREACTOR FOR ACID MINE DRAINAGE TREATMENT
The paper focuses on the improvements to engineered features of a passive technology that has been used for remediation of acid rock drainage (ARD). This passive remedial technology, a sulfate-reducing bacteria (SRB) bioreactor, takes advantage of the ability of SRB that, if sup...
Verification testing of the Hydro International Up-Flo™ Filter with one filter module and CPZ Mix™ filter media was conducted at the Penn State Harrisburg Environmental Engineering Laboratory in Middletown, Pennsylvania. The Up-Flo™ Filter is designed as a passive, modular filtr...
Environmental engineering calculations involving uncertainties; either in the model itself or in the data, are far beyond the capabilities of conventional analysis for any but the simplest of models. There exist a number of general-purpose computer simulation languages, using Mon...
Computer-aided dental prostheses construction using reverse engineering.
Solaberrieta, E; Minguez, R; Barrenetxea, L; Sierra, E; Etxaniz, O
2014-01-01
The implementation of computer-aided design/computer-aided manufacturing (CAD/CAM) systems with virtual articulators, which take into account the kinematics, constitutes a breakthrough in the construction of customised dental prostheses. This paper presents a multidisciplinary protocol involving CAM techniques to produce dental prostheses. This protocol includes a step-by-step procedure using innovative reverse engineering technologies to transform completely virtual design processes into customised prostheses. A special emphasis is placed on a novel method that permits a virtual location of the models. The complete workflow includes the optical scanning of the patient, the use of reverse engineering software and, if necessary, the use of rapid prototyping to produce CAD temporary prostheses.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology, and NVIDIA GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; the integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Wilson, A.
2010-12-01
The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an Open Source, OPeNDAP compliant, Java Servlet based, RESTful web service to serve time series data. In addition to handling OPeNDAP style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM) which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from their native form, performs some processing on the server, and presents the results to the client in its preferred form. The service is currently being used operationally to serve time series data for the LASP Interactive Solar Irradiance Data Center (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts for evolving beyond the time series abstraction and providing a general purpose data service that can be orchestrated into larger workflows.
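The pluggable Reader/Filter/Writer architecture described above can be sketched as a trio of small interfaces; the class and method names below are hypothetical, not the server's actual API.

```python
from abc import ABC, abstractmethod
from typing import Iterable, List, Tuple

Sample = Tuple[float, float]  # (time, value) pair in a time series

class Reader(ABC):
    @abstractmethod
    def read(self) -> Iterable[Sample]:
        """Produce samples from a native data source."""

class Filter(ABC):
    @abstractmethod
    def apply(self, samples: Iterable[Sample]) -> Iterable[Sample]:
        """Transform samples on the server (e.g. subset, resample)."""

class Writer(ABC):
    @abstractmethod
    def write(self, samples: Iterable[Sample]) -> str:
        """Render samples in the client's preferred output format."""

def serve(reader: Reader, filters: List[Filter], writer: Writer) -> str:
    """Read native data, process it server-side, format the result."""
    samples = reader.read()
    for f in filters:
        samples = f.apply(samples)
    return writer.write(samples)
```

Because each concern sits behind its own interface, new formats or processing steps plug in without touching the rest of the pipeline, which is the loose coupling the abstract emphasizes.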
Knowledge management in the engineering design environment
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2006-01-01
The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.
Development of a Turbofan Engine Simulation in a Graphical Simulation Environment
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Heui
2003-01-01
This paper presents the development of a generic component level model of a turbofan engine simulation with a digital controller, in an advanced graphical simulation environment. The goal of this effort is to develop and demonstrate a flexible simulation platform for future research in propulsion system control and diagnostic technology. A previously validated FORTRAN-based model of a modern, high-performance, military-type turbofan engine is being used to validate the platform development. The implementation process required the development of various innovative procedures, which are discussed in the paper. Open-loop and closed-loop comparisons are made between the two simulations. Future enhancements that are to be made to the modular engine simulation are summarized.
NASA Technical Reports Server (NTRS)
Miller, Christopher R.
2008-01-01
This work addresses usage and integrated vehicle health management of the NASA C-17. Propulsion health management flight objectives for the aircraft include mapping of the high-pressure compressor in order to calibrate a Pratt and Whitney engine model, and the fusion of data collected from existing sensors and signals to develop models, analysis methods, and information fusion algorithms. An additional health management flight objective is to demonstrate that the Commercial Modular Aero-Propulsion System Simulation engine model can successfully execute in real time onboard the C-17 T-1 aircraft using engine and aircraft flight data as inputs. Future work will address aircraft durability and aging, airframe health management, and propulsion health management research in the areas of gas path and engine vibration.
A modular positron camera for the study of industrial processes
NASA Astrophysics Data System (ADS)
Leadbeater, T. W.; Parker, D. J.
2011-10-01
Positron imaging techniques rely on the detection of the back-to-back annihilation photons arising from positron decay within the system under study. A standard technique, called positron emitting particle tracking (PEPT) [1], uses a number of these detected events to rapidly determine the position of a positron-emitting tracer particle introduced into the system under study. Typical applications of PEPT are in the study of granular and multi-phase materials in the disciplines of engineering and the physical sciences. Using components from redundant medical PET scanners, a modular positron camera has been developed. This camera consists of a number of small independent detector modules, which can be arranged in custom geometries tailored towards the application in question. The flexibility of the modular camera geometry allows for high photon detection efficiency within specific regions of interest, the ability to study large and bulky systems, and the application of PEPT to difficult or remote processes, as the camera is inherently transportable.
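At the heart of PEPT is locating the tracer as the point minimizing the summed squared distance to the detected coincidence lines, with corrupt (scattered) events iteratively discarded. A least-squares sketch of that core step, assuming each event yields a point p on the line and a unit direction d:

```python
import numpy as np

def locate_tracer(points, dirs):
    """Least-squares point closest to a set of 3D lines.
    points: (N, 3) array, one point on each line.
    dirs:   (N, 3) array of unit direction vectors."""
    S = np.zeros((3, 3))
    rhs = np.zeros(3)
    for p, d in zip(points, dirs):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the line
        S += P
        rhs += P @ p
    return np.linalg.solve(S, rhs)

# A practical PEPT pass repeats this, discarding the lines furthest
# from the estimate (likely scattered photons) before re-solving.
```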
BeamDyn: a high-fidelity wind turbine blade solver in the FAST modular framework
Wang, Qi; Sprague, Michael A.; Jonkman, Jason; ...
2017-03-14
This paper presents a numerical implementation of the geometrically exact beam theory based on the Legendre-spectral-finite-element (LSFE) method. The displacement-based geometrically exact beam theory is presented, and the special treatment of three-dimensional rotation parameters is reviewed. An LSFE is a high-order finite element with nodes located at the Gauss-Legendre-Lobatto points. These elements can be an order of magnitude more computationally efficient than low-order finite elements for a given accuracy level. The new module, BeamDyn, is implemented in the FAST modularization framework for dynamic simulation of highly flexible composite-material wind turbine blades within the FAST aeroelastic engineering model. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples are provided to validate BeamDyn and examine the LSFE performance as well as the coupling algorithm in the FAST modularization framework. BeamDyn can also be used as a stand-alone high-fidelity beam tool.
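A Legendre spectral finite element places its nodes at the Gauss-Legendre-Lobatto (GLL) points: the interval endpoints plus the roots of P'_N, the derivative of the degree-N Legendre polynomial. A short numpy sketch of computing them (illustrative only; not BeamDyn's implementation):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order: int) -> np.ndarray:
    """Gauss-Legendre-Lobatto nodes on [-1, 1] for a degree-`order`
    element: the endpoints plus the order-1 roots of P'_N."""
    cN = np.zeros(order + 1)
    cN[-1] = 1.0                      # P_N in the Legendre basis
    interior = legendre.legroots(legendre.legder(cN))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

print(gll_nodes(4))  # 5 nodes for a 4th-order element
```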
Fujita, Yuki; Ishikawa, Junya; Furuta, Hiroyuki; Ikawa, Yoshiya
2010-08-26
In vitro selection with long random RNA libraries has been used as a powerful method to generate novel functional RNAs, although it often requires laborious structural analysis of isolated RNA molecules. Rational RNA design is an attractive alternative to avoid this laborious step, but rational design of catalytic modules is still a challenging task. A hybrid strategy of in vitro selection and rational design has been proposed. With this strategy termed "design and selection," new ribozymes can be generated through installation of catalytic modules onto RNA scaffolds with defined 3D structures. This approach, the concept of which was inspired by the modular architecture of naturally occurring ribozymes, allows prediction of the overall architectures of the resulting ribozymes, and the structural modularity of the resulting ribozymes allows modification of their structures and functions. In this review, we summarize the design, generation, properties, and engineering of four classes of ligase ribozyme generated by design and selection.
Modular 3D-Printed Soil Gas Probes
NASA Astrophysics Data System (ADS)
Good, S. P.; Selker, J. S.; Al-Qqaili, F.; Lopez, M.; Kahel, L.
2016-12-01
Extraction of soil gas is required for a variety of applications in earth sciences and environmental engineering. However, commercially available probes can be costly and are typically limited to a single depth. Here, we present the open-source design and lab testing of a soil gas probe with modular capabilities that allow for the vertical stacking of gas extraction points at different depths in the soil column. The probe modules consist of a 3D-printed spacer unit and a hydrophobic gas-permeable membrane made of high-density polyethylene with pore sizes of 20-40 microns. Each of the modular spacer units contains both a gas extraction line and a gas input line for the dilution of soil gases if needed. These 2-inch-diameter probes can be installed in the field quickly with a hand auger and returned to at any frequency to extract soil gas from desired soil depths. The probes were tested through extraction of soil pore water vapors with distinct stable isotope ratios.
Modular adaptive implant based on smart materials.
Bîzdoacă, N; Tarniţă, Daniela; Tarniţă, D N
2008-01-01
Applications of biological methods and systems found in nature to the study and design of engineering systems and modern technology are defined as bionics. The present paper describes a bionic application of a shape memory alloy in the construction of an orthopedic implant. The main idea of this paper is the design of modular adaptive implants for fractured bones. To maximize the efficiency of medical treatment, the implant has to protect the fractured bone for the healing period, undertaking as much as possible of the daily load of the healthy bone. After a particular stage of the healing period has passed, the implant's modularity allows the load to be gradually transferred to the bone, assuring in this manner a gradual recovery of bone function. The adaptability of this design refers to the physician's ability to match the implant to the patient's specific anatomy. Using realistic CT-based numerical bone models, mechanical simulations of different types of loading of fractured bones treated with the conventional method are presented. The results are discussed and conclusions are formulated.
In-database processing of a large collection of remote sensing data: applications and implementation
NASA Astrophysics Data System (ADS)
Kikhtenko, Vladimir; Mamash, Elena; Chubarov, Dmitri; Voronina, Polina
2016-04-01
Large archives of remote sensing data are now available to scientists, yet the need to work with individual satellite scenes or product files constrains studies that span a wide temporal range or spatial extent. The resources (storage capacity, computing power, and network bandwidth) required for such studies are often beyond the capabilities of individual geoscientists. This problem has been tackled before in remote sensing research and inspired several information systems. Some of them, such as NASA Giovanni [1] and Google Earth Engine, have already proved their utility for science. Analysis tasks involving large volumes of numerical data are not unique to Earth Sciences. Recent advances in data science are enabled by the development of in-database processing engines that bring processing closer to storage, use declarative query languages to facilitate parallel scalability, and provide a high-level abstraction of the whole dataset. We build on the idea of bridging the gap between file archives containing remote sensing data and databases by integrating files into a relational database as foreign data sources and performing analytical processing inside the database engine. A higher-level query language can thereby efficiently address problems of arbitrary size, from accessing the data associated with a specific pixel or grid cell to complex aggregations over spatial or temporal extents spanning a large number of individual data files. This approach was implemented using PostgreSQL for a Siberian regional archive of satellite data products holding hundreds of terabytes of measurements from multiple sensors and missions taken over a decade-long span. While preserving the original storage layout, and therefore compatibility with existing applications, the in-database processing engine provides a toolkit for provisioning remote sensing data in scientific workflows and applications. The use of SQL, a widely used higher-level declarative query language, simplifies interoperability between desktop GIS, web applications, geographic web services, and interactive scientific applications (MATLAB, IPython). The system also automatically ingests direct-readout data from meteorological and research satellites in near-real time, with distributed acquisition workflows managed by the Taverna workflow engine [2]. The system has demonstrated its utility in performing non-trivial analytic processing such as the computation of the Robust Satellite Technique (RST) indices [3]. It has been useful in tasks such as studying urban heat islands, analyzing patterns in the distribution of wildfire occurrences, and detecting phenomena related to seismic and earthquake activity. Initial experience has highlighted several limitations of the proposed approach, yet it has demonstrated the ability to facilitate the use of large archives of remote sensing data by geoscientists. 1. J.G. Acker, G. Leptoukh, Online analysis enhances use of NASA Earth science data. EOS Trans. AGU, 2007, 88(2), P. 14-17. 2. D. Hull, K. Wolstencroft, R. Stevens, C. Goble, M.R. Pocock, P. Li and T. Oinn, Taverna: a tool for building and running workflows of services. Nucleic Acids Research, 2006, V. 34, P. W729-W732. 3. V. Tramutoli, G. Di Bello, N. Pergola, S. Piscitelli, Robust satellite techniques for remote sensing of seismically active areas. Annals of Geophysics, 2001, no. 44(2), P. 295-312.
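As a flavor of the in-database approach, the sketch below issues one declarative query that aggregates a decade of scenes inside the database engine; the table and column names are hypothetical, not the Siberian archive's actual schema.

```python
import psycopg2  # assumes PostgreSQL with PostGIS and the archive attached

# Hypothetical foreign table: one row per pixel (acquisition time, cell
# geometry, measured value), backed by the original files on disk.
QUERY = """
SELECT date_trunc('month', acquired_at) AS month,
       avg(value)                       AS mean_value
FROM   sensor_pixels
WHERE  ST_Intersects(cell, ST_MakeEnvelope(82.5, 54.5, 83.5, 55.5, 4326))
  AND  acquired_at BETWEEN '2005-01-01' AND '2015-01-01'
GROUP  BY 1
ORDER  BY 1;
"""

with psycopg2.connect("dbname=rsarchive") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for month, mean_value in cur.fetchall():
            print(month, mean_value)
```

The point of the pattern is that the loop over thousands of individual files happens next to the storage, while the client sees only the small aggregated result.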
Modern Technologies for Creating Synthetic Antibodies for Clinical application
Lebedenko, E. N.
2009-01-01
The modular structure and versatility of antibodies enables one to modify natural immunoglobulins in different ways for various clinical applications. Rational design and molecular engineering make it possible to directionally modify the molecular size, affinity, specificity, and immunogenicity and effector functions of an antibody, as well as to combine them with other functional agents. This review focuses on up-to-date methods of antibody engineering for diagnosing and treating various diseases, particularly on new technologies meant to refine the effector functions of therapeutic antibodies. PMID:22649585
1999-12-01
addition, the data files saved in the POINT format can include an optional header which is compatible with Amtec Engineering's 2-D and 3-D visualization... ".DAT" file so that the file can be used directly by Amtec Engineering's 2-D and 3-D visualization package Tecplot©. The ARRAY and POINT formats are
ORAC-DR: Pipelining With Other People's Code
NASA Astrophysics Data System (ADS)
Economou, Frossie; Bridger, Alan; Wright, Gillian S.; Jenness, Tim; Currie, Malcolm J.; Adamson, Andy
As part of the UKIRT ORAC project, we have developed a pipeline (orac-dr) for driving on-line data reduction using existing astronomical packages as algorithm engines and display tools. The design is modular and extensible on several levels, allowing it to be easily adapted to a wide variety of instruments. Here we briefly review the design, discuss the robustness and speed-of-execution issues inherent in such pipelines, and address what constitutes a desirable (in terms of "buy-in" effort) engine or tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savoie, M.J.; Schanche, G.W.; Mikucki, W.J.
This report provides technical information on modular solid-waste heat-recovery incinerators (HRIs), air-pollution regulations that apply to HRIs, air-pollutant emissions from currently marketed HRIs, and air-pollution-control techniques for HRIs. The information will be useful to Army installations, Major Commands, and Corps of Engineers Districts that must plan and design HRI facilities.
2016-04-30
Proceedings Magazine, 138/7/7, 313. Holtta-Otto, K., & de Weck, O. (2007). Degree of modularity in engineering systems and products with technical and... [table fragment: levels (STANAG); select adaptable system vs. select optimized system; select adaptable system if confident in a <= X% likelihood this...]
40 CFR 461.2 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY General Provisions § 461.2 General definitions. In...) “Battery” means a modular electric power source where part or all of the fuel is contained within the unit... heat cycle engine. In this regulation there is no differentiation between a single cell and a battery...
Designing for the ISD Life Cycle.
ERIC Educational Resources Information Center
Wallace, Guy W.; Hybert, Peter R.; Smith, Kelly R.; Blecke, Brian D.
2002-01-01
Outlines the recent criticisms of traditional ISD (Instructional Systems Design) and discusses the implications that impact the life cycle costs of T&D (Training and Development) projects and their ROI (Return On Investment) potential. Describes a modified approach to ISD which mimics the modular approach of systems engineering design.…
40 CFR 461.2 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS BATTERY MANUFACTURING POINT SOURCE CATEGORY General Provisions § 461.2 General definitions. In...) “Battery” means a modular electric power source where part or all of the fuel is contained within the unit... heat cycle engine. In this regulation there is no differentiation between a single cell and a battery...
NASA Technical Reports Server (NTRS)
Proctor, B. W.; Reysa, R. P.; Russell, D. J.
1975-01-01
Data collected for the appliances considered for the space station are presented, along with plotted and tabulated trade study results for each appliance. The food management and personal hygiene data are applicable to a six-person, 180-day mission.
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species, and analytical instruments). We defined a set of quality-control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there is a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values, and the final claims of the research manuscript. Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods had never been performed. The Journal of Proteomics has previously published multiple benchmark studies of bioinformatics algorithms (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims at providing a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five different algorithms - PIA, ProteinProphet, Fido, ProteinLP, and MSBayesPro - were evaluated using the highly customizable workflow on four public datasets with varying complexities. Three popular database search engines - Mascot, X!Tandem, and MS-GF+ - and combinations thereof were evaluated for every protein inference tool. In total, >186 protein lists were analyzed and carefully compared using three metrics for quality assessment of the protein inference results: 1) the number of reported proteins, 2) peptides per protein, and 3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms, and parameters on each dataset.
The results show that: 1) using PIA or Fido seems to be a good choice when studying the results of the analyzed workflow, regarding not only the reported proteins and the high-quality identifications but also the required runtime; 2) merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group; 3) the usage of databases containing not only the canonical but also known isoforms of proteins has a small impact on the number of reported proteins, and the detection of specific isoforms could, depending on the question behind the study, compensate for the slightly shorter parsimonious reports; 4) the current workflow can easily be extended to support new algorithms and search-engine combinations. Copyright © 2016. Published by Elsevier B.V.
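The parsimonious reports mentioned in point 3 correspond to the smallest protein set that explains every identified peptide, which inference tools typically approximate with a greedy set cover. A minimal sketch of that idea (not any specific tool's implementation):

```python
def parsimonious_proteins(peptides_by_protein):
    """Greedy set-cover approximation of the minimal protein list
    explaining all identified peptides.
    peptides_by_protein: dict mapping protein -> set of peptides."""
    uncovered = set().union(*peptides_by_protein.values())
    candidates = dict(peptides_by_protein)
    report = []
    while uncovered and candidates:
        # Protein explaining the most still-unexplained peptides wins.
        best = max(candidates, key=lambda p: len(candidates[p] & uncovered))
        gained = candidates.pop(best) & uncovered
        if not gained:
            break
        report.append(best)
        uncovered -= gained
    return report

print(parsimonious_proteins({
    "P1": {"pepA", "pepB"}, "P2": {"pepB"}, "P3": {"pepC"}}))
# ['P1', 'P3'] -- P2 is redundant: all its peptides are explained by P1
```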
Future Software Sizing Metrics and Estimation Challenges
2011-07-01
systems; 4. Ultrahigh software system assurance; 5. Legacy maintenance and Brownfield development; 6. Agile and Lean/Kanban development. This paper... refined as the design of the maintenance modifications or Brownfield re-engineering is determined. The difficulties of software maintenance estimation can often be mitigated by using lean workflow management techniques such as Kanban [25]. In Kanban
Protein nanoparticles are nontoxic, tuneable cell stressors.
de Pinho Favaro, Marianna Teixeira; Sánchez-García, Laura; Sánchez-Chardi, Alejandro; Roldán, Mónica; Unzueta, Ugutz; Serna, Naroa; Cano-Garrido, Olivia; Azzoni, Adriano Rodrigues; Ferrer-Miralles, Neus; Villaverde, Antonio; Vázquez, Esther
2018-02-01
Nanoparticle-cell interactions can promote cell toxicity and stimulate particular behavioral patterns, but cell responses to protein nanomaterials have been poorly studied. By repositioning oligomerization domains in a simple, modular self-assembling protein platform, we have generated closely related but distinguishable homomeric nanoparticles. Composed by building blocks with modular domains arranged in different order, they share amino acid composition. These materials, once exposed to cultured cells, are differentially internalized in absence of toxicity and trigger distinctive cell adaptive responses, monitored by the emission of tubular filopodia and enhanced drug sensitivity. The capability to rapidly modulate such cell responses by conventional protein engineering reveals protein nanoparticles as tuneable, versatile and potent cell stressors for cell-targeted conditioning.
Servicer system demonstration plan and capability development
NASA Technical Reports Server (NTRS)
1987-01-01
An orbital maneuvering vehicle (OMV) front-end kit is defined which is capable of performing in-situ fluid resupply and modular maintenance of free-flying spacecraft based on the integrated orbital servicing system (IOSS) concept. The compatibility of the IOSS to perform gas and fluid umbilical connect and disconnect functions utilizing connect systems currently available or in development is addressed. A series of tasks involving on-orbit servicing and the engineering test unit (ETU) of the on-orbit servicer were studied. The objective is the advancement of orbital servicing by expanding the Spacecraft Servicing Demonstration Plan (SSDP) to include detailed demonstration planning using the Multimission Modular Spacecraft (MMS) and upgrading of the ETU control.
Progress toward Modular UAS for Geoscience Applications
NASA Astrophysics Data System (ADS)
Dahlgren, R. P.; Clark, M. A.; Comstock, R. J.; Fladeland, M.; Gascot, H., III; Haig, T. H.; Lam, S. J.; Mazhari, A. A.; Palomares, R. R.; Pinsker, E. A.; Prathipati, R. T.; Sagaga, J.; Thurling, J. S.; Travers, S. V.
2017-12-01
Small Unmanned Aerial Systems (UAS) have become accepted tools for geoscience, ecology, agriculture, disaster response, land management, and industry. A variety of consumer UAS options exist as science and engineering payload platforms, but their incompatibilities with one another contribute to high operational costs compared with those of piloted aircraft. This research explores the concept of modular UAS, demonstrating airframes that can be reconfigured in the field for experimental optimization, to enable multi-mission support, facilitate rapid repair, or respond to changing field conditions. Modular UAS is revolutionary in allowing aircraft to be optimized around the payload, reversing the conventional wisdom of designing the payload to accommodate an unmodifiable aircraft. UAS that are reconfigurable like Legos™ are ideal for airborne science service providers, system integrators, instrument designers and end users to fulfill a wide range of geoscience experiments. Modular UAS facilitate the adoption of open-source software and rapid prototyping technology where design reuse is important in the context of a highly regulated industry like aerospace. The industry is now at a stage where consolidation, acquisition, and attrition will reduce the number of small manufacturers, with a reduction of innovation and motivation to reduce costs. Modularity leads to interface specifications, which can evolve into de facto or formal standards which contain minimum (but sufficient) details such that multiple vendors can then design to those standards and demonstrate interoperability. At that stage, vendor coopetition leads to robust interface standards, interoperability standards and multi-source agreements which in turn drive costs down significantly.
The Symbiotic Relationship between Scientific Workflow and Provenance (Invited)
NASA Astrophysics Data System (ADS)
Stephan, E.
2010-12-01
The purpose of this presentation is to describe the symbiotic nature of scientific workflows and provenance. We will also discuss the current trends and real-world challenges facing these two distinct research areas. Although motivated differently, the needs of the international science communities are the glue that binds this relationship together. Understanding and articulating the science drivers to these communities is paramount as these technologies evolve and mature. Originally conceived for managing business processes, workflows are now becoming invaluable assets in both computational and experimental sciences. These reconfigurable, automated systems provide essential technology to perform complex analyses by coupling together geographically distributed, disparate data sources and applications. As a result, workflows are capable of higher throughput in a shorter amount of time than performing the steps manually. Today many different workflow products exist; these include Kepler and Taverna, or similar products like MeDICI, developed at PNNL, that are standardized on the Business Process Execution Language (BPEL). Provenance, originating from the French term provenir, "to come from", is used to describe the curation process of artwork as art is passed from owner to owner. The concept of provenance was adopted by digital libraries as a means to track the lineage of documents while standards such as Dublin Core began to emerge. In recent years the systems science community has increasingly expressed the need to expand the concept of provenance to formally articulate the history of scientific data. Communities such as the International Provenance and Annotation Workshop (IPAW) have formalized a provenance data model, the Open Provenance Model, and the W3C is hosting a provenance incubator group featuring the Proof Markup Language. Although both workflows and provenance have arisen from different communities and operate independently, their mutual success is tied together, forming a symbiotic relationship where research and development advances in one effort can provide tremendous benefits to the other. For example, automating provenance extraction within scientific applications is still a relatively new concept; the workflow engine provides the framework to capture application-specific operations, inputs, and resulting data. It provides a description of the process history and data flow by wrapping workflow components around the applications and data sources. On the other hand, a lack of cooperation between workflows and provenance can inhibit the usefulness of both to science. Blindly tracking the execution history without having a true understanding of what kinds of questions end users may have makes the provenance indecipherable to the target users. Over the past nine years PNNL has been actively involved in provenance research in support of computational chemistry, molecular dynamics, biology, hydrology, and climate. PNNL has also been actively involved in efforts by the international community to develop open standards for provenance and the development of architectures to support provenance capture, storage, and querying. This presentation will provide real-world use cases of how provenance and workflow can be leveraged and implemented to meet different needs, and the challenges that lie ahead.
The Sargassum Early Advisory System (SEAS)
NASA Astrophysics Data System (ADS)
Armstrong, D.; Gallegos, S. C.
2016-02-01
The Sargassum Early Advisory System (SEAS) web app was designed to automatically detect Sargassum at sea, forecast movement of the seaweed, and alert users of potential landings. Inspired to help address the economic hardships caused by large landings of Sargassum, the web app automates and enhances the manual tasks conducted by the SEAS group of Texas A&M University at Galveston. The SEAS web app is a modular, mobile-friendly tool that automates the entire workflow from data acquisition to user management. The modules include: 1) an Imagery Retrieval Module to automatically download Landsat-8 Operational Land Imagery (OLI) from the United States Geological Survey (USGS); 2) a Processing Module for automatic detection of Sargassum in the OLI imagery and subsequent mapping of these patches onto the HYCOM grid, producing maps that show Sargassum clusters; 3) a Forecasting Engine fed by HYbrid Coordinate Ocean Model (HYCOM) currents and winds from weather buoys; and 4) a mobile-phone-optimized geospatial user interface. The user can view the last known position of Sargassum clusters and trajectory and location projections for the next 24, 72, and 168 hours. Users can also subscribe to alerts generated for particular areas. Currently, the SEAS web app produces advisories for Texas beaches. The forecasted Sargassum landing locations are validated by reports from Texas beach managers. However, the SEAS web app was designed to easily expand to other areas, and future plans call for extending the SEAS web app to Mexico and the Caribbean islands. The SEAS web app development is led by NASA, with participation by ASRC Federal/Computer Science Corporation and the Naval Research Laboratory, all at Stennis Space Center, and Texas A&M University at Galveston.
NASA Astrophysics Data System (ADS)
Nelson, J.; Ames, D. P.; Jones, N.; Tarboton, D. G.; Li, Z.; Qiao, X.; Crawley, S.
2016-12-01
As water resources data continue to move to the web in the form of well-defined, open-access, machine-readable web services provided by government, academic, and private institutions, there is increased opportunity to move additional parts of the water science workflow to the web (e.g., analysis, modeling, decision support, and collaboration). Creating such web-based functionality can be extremely time-consuming and resource-intensive and can lead the erstwhile water scientist down a veritable cyberinfrastructure rabbit hole, through an unintended tunnel of transformation to become a Cyber-Wonderland software engineer. We posit that such transformations were never the intention of the research programs that fund earth science cyberinfrastructure, nor is it in the best interest of water researchers to spend exorbitant effort developing and deploying such technologies. This presentation will introduce a relatively simple and ready-to-use water science web app environment funded by the National Science Foundation that couples the new HydroShare data publishing system with the Tethys Platform web app development toolkit. The coupled system has already been shown to greatly lower the barrier to deploying web-based visualization and analysis tools for the CUAHSI Water Data Center and for the National Weather Service's National Water Model. The design and implementation of the developed web app architecture will be presented, together with key examples of existing apps created using this system. In each of the cases presented, water resources students with basic programming skills were able to develop and deploy highly functional web apps in a relatively short period of time (days to weeks), allowing the focus to remain on water science rather than on cyberinfrastructure. This presentation is accompanied by an open invitation for new collaborations that use the HydroShare-Tethys web app environment.
Blanco-Claraco, José Luis; López-Martínez, Javier; Torres-Moreno, José Luis; Giménez-Fernández, Antonio
2015-01-01
Most experimental fields of science and engineering require the use of data acquisition systems (DAQ), devices in charge of sampling and converting electrical signals into digital data and, typically, performing all of the required signal preconditioning. Since commercial DAQ systems are normally focused on specific types of sensors and actuators, systems engineers may need to employ mutually-incompatible hardware from different manufacturers in applications demanding heterogeneous inputs and outputs, such as small-signal analog inputs, differential quadrature rotatory encoders or variable current outputs. A common undesirable side effect of heterogeneous DAQ hardware is the lack of an accurate synchronization between samples captured by each device. To solve such a problem with low-cost hardware, we present a novel modular DAQ architecture comprising a base board and a set of interchangeable modules. Our main design goal is the ability to sample all sources at predictable, fixed sampling frequencies, with a reduced synchronization mismatch (<1 μs) between heterogeneous signal sources. We present experiments in the field of mechanical engineering, illustrating vibration spectrum analyses from piezoelectric accelerometers and, as a novelty in these kinds of experiments, the spectrum of quadrature encoder signals. Part of the design and software will be publicly released online. PMID:26516865
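The architecture's central promise is that every heterogeneous channel is captured on one fixed, shared timebase. The paper achieves this in hardware at acquisition time; purely as an illustration of the end goal, the sketch below shows the software equivalent of aligning independently timestamped channels onto a common clock grid.

```python
import numpy as np

def align_sources(sources, rate_hz=1000.0):
    """Interpolate timestamped channels onto one shared clock grid.
    sources: dict name -> (timestamps_s, values), timestamps increasing."""
    t0 = max(ts[0] for ts, _ in sources.values())
    t1 = min(ts[-1] for ts, _ in sources.values())
    grid = np.arange(t0, t1, 1.0 / rate_hz)
    return grid, {name: np.interp(grid, ts, vals)
                  for name, (ts, vals) in sources.items()}
```

Hardware-synchronized sampling avoids the interpolation error this post-hoc approach introduces, which is precisely why the sub-microsecond mismatch reported above matters.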
Assembly of tissue engineered blood vessels with spatially-controlled heterogeneities.
Strobel, Hannah A; Hookway, Tracy; Piola, Marco; Fiore, Gianfranco Beniamino; Soncini, Monica; Alsberg, Eben; Rolle, Marsha
2018-05-04
Tissue-engineered human blood vessels may enable in vitro disease modeling and drug screening to accelerate advances in vascular medicine. Existing methods for tissue engineered blood vessel (TEBV) fabrication create homogenous tubes not conducive to modeling the focal pathologies characteristic of vascular disease. We developed a system for generating self-assembled human smooth muscle cell ring-units, which were fused together into TEBVs. The goal of this study was to assess the feasibility of modular assembly and fusion of ring building units to fabricate spatially-controlled, heterogeneous tissue tubes. We first aimed to enhance fusion and reduce total culture time, and determined that reducing ring pre-culture duration improved tube fusion. Next, we incorporated electrospun polymer ring units onto tube ends as reinforced extensions, which allowed us to cannulate tubes after only 7 days of fusion, and culture tubes with luminal flow in a custom bioreactor. To create focal heterogeneities, we incorporated gelatin microspheres into select ring units during self-assembly, and fused these rings between ring units without microspheres. Cells within rings maintained their spatial position within tissue tubes after fusion. This work describes a platform approach for creating modular TEBVs with spatially-defined structural heterogeneities, which may ultimately be applied to mimic focal diseases such as intimal hyperplasia or aneurysm.
NRC Reviewer Aid for Evaluating the Human Factors Engineering Aspects of Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
OHara J. M.; Higgins, J.C.
Small modular reactors (SMRs) are a promising approach to meeting future energy needs. Although the electrical output of an individual SMR is relatively small compared to that of typical commercial nuclear plants, they can be grouped to produce as much energy as a utility demands. Furthermore, SMRs can be used for other purposes, such as producing hydrogen and generating process heat. The design characteristics of many SMRs differ from those of current conventional plants and may require a distinct concept of operations (ConOps). The U.S. Nuclear Regulatory Commission (NRC) conducted research to examine the human factors engineering (HFE) and the operational aspects of SMRs. The research identified thirty potential human-performance issues that should be considered in the NRC's reviews of SMR designs and in future research activities. The purpose of this report is to support NRC HFE reviewers of SMR applications by identifying some of the questions that can be asked of applicants whose designs have characteristics identified in the issues. The questions for each issue were identified and organized based on the review elements and guidance contained in Chapter 18 of the Standard Review Plan (NUREG-0800) and the Human Factors Engineering Program Review Model (NUREG-0711).
Leonard, Sean P; Perutka, Jiri; Powell, J Elijah; Geng, Peng; Richhart, Darby D; Byrom, Michelle; Kar, Shaunak; Davies, Bryan W; Ellington, Andrew D; Moran, Nancy A; Barrick, Jeffrey E
2018-05-18
Engineering the bacteria present in animal microbiomes promises to lead to breakthroughs in medicine and agriculture, but progress is hampered by a dearth of tools for genetically modifying the diverse species that comprise these communities. Here we present a toolkit of genetic parts for the modular construction of broad-host-range plasmids built around the RSF1010 replicon. Golden Gate assembly of parts in this toolkit can be used to rapidly test various antibiotic resistance markers, promoters, fluorescent reporters, and other coding sequences in newly isolated bacteria. We demonstrate the utility of this toolkit in multiple species of Proteobacteria that are native to the gut microbiomes of honey bees (Apis mellifera) and bumble bees (Bombus sp.). Expressing fluorescent proteins in Snodgrassella alvi, Gilliamella apicola, Bartonella apis, and Serratia strains enables us to visualize how these bacteria colonize the bee gut. We also demonstrate CRISPRi repression in B. apis and use Cas9-facilitated knockout of an S. alvi adhesion gene to show that it is important for colonization of the gut. Beyond characterizing how the gut microbiome influences the health of these prominent pollinators, this bee microbiome toolkit (BTK) will be useful for engineering bacteria found in other natural microbial communities.
A versatile modular bioreactor platform for Tissue Engineering.
Schuerlein, Sebastian; Schwarz, Thomas; Krziminski, Steffan; Gätzner, Sabine; Hoppensack, Anke; Schwedhelm, Ivo; Schweinlin, Matthias; Walles, Heike; Hansmann, Jan
2017-02-01
Tissue Engineering (TE) bears potential to overcome the persistent shortage of donor organs in transplantation medicine. Additionally, TE products are applied as human test systems in pharmaceutical research to close the gap between animal testing and the administration of drugs to human subjects in clinical trials. However, generating a tissue requires complex culture conditions provided by bioreactors. Currently, the translation of TE technologies into clinical and industrial applications is limited due to a wide range of different tissue-specific, non-disposable bioreactor systems. To ensure a high level of standardization, a suitable cost-effectiveness, and a safe graft production, a generic modular bioreactor platform was developed. Functional modules provide robust control of culture processes, e.g. medium transport, gas exchange, heating, or trapping of floating air bubbles. Characterization revealed improved performance of the modules in comparison to traditional cell culture equipment such as incubators or peristaltic pumps. By combining the modules, a broad range of culture conditions can be achieved. The novel bioreactor platform allows using disposable components and facilitates tissue culture in closed fluidic systems. By sustaining native carotid arteries, engineering a blood vessel, and generating intestinal tissue models according to a previously published protocol, the feasibility and performance of the bioreactor platform were demonstrated. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modular and selective biosynthesis of gasoline-range alkanes.
Sheppard, Micah J; Kunjapur, Aditya M; Prather, Kristala L J
2016-01-01
Typical renewable liquid fuel alternatives to gasoline are not entirely compatible with current infrastructure. We have engineered Escherichia coli to selectively produce alkanes found in gasoline (propane, butane, pentane, heptane, and nonane) from renewable substrates such as glucose or glycerol. Our modular pathway framework achieves carbon-chain extension by two different mechanisms. A fatty acid synthesis route is used to generate the longer-chain alkanes heptane and nonane, while a more energy-efficient alternative, reverse-β-oxidation, is used for synthesis of propane, butane, and pentane. We demonstrate that both upstream (thiolase) and intermediate (thioesterase) reactions can act as control points for chain-length specificity. Specific free fatty acids are subsequently converted to alkanes using a broad-specificity carboxylic acid reductase and a cyanobacterial aldehyde decarbonylase (AD). The selectivity obtained by different module pairings provides a foundation for tuning alkane product distribution for desired fuel properties. Alternate ADs that have greater activity on shorter substrates improve observed alkane titer. However, even in an engineered host strain that significantly reduces endogenous conversion of aldehyde intermediates to alcohol byproducts, AD activity is observed to be limiting for all chain lengths. Given these insights, we discuss guiding principles for pathway selection and potential opportunities for pathway improvement. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Effects of unique biomedical education programs for engineers: REDEEM and ESTEEM projects.
Matsuki, Noriaki; Takeda, Motohiro; Yamano, Masahiro; Imai, Yohsuke; Ishikawa, Takuji; Yamaguchi, Takami
2009-06-01
Current engineering applications in the medical arena are extremely progressive. However, it is rather difficult for medical doctors and engineers to discuss issues because they do not always understand one another's jargon or ways of thinking. Ideally, medical engineers should become acquainted with medicine, and engineers should be able to understand how medical doctors think. Tohoku University in Japan has managed a number of unique reeducation programs for working engineers. Recurrent Education for the Development of Engineering Enhanced Medicine has been offered as a basic learning course since 2004, and Education through Synergetic Training for Engineering Enhanced Medicine has been offered as an advanced learning course since 2006. These programs, which were developed especially for engineers, consist of interactive, modular, and disease-based lectures (case studies) and substantial laboratory work. As a result of taking these courses, students achieved better objective outcomes, as measured by tests, and better subjective outcomes, as measured by student satisfaction. In this article, we report on our unique biomedical education programs for engineers and their effects on working engineers.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI-Analysis of Sensitivity and Kernel Inversion-recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low costs applying different kinds of model regularization or re-selecting/weighting the inverted dataset without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows users to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).
Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy
Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.
2014-01-01
This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration, kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot is evaluated inside a 3-Tesla MRI scanner using standard imaging sequences, and the signal-to-noise ratio (SNR) loss is limited to 15%. No image interference due to the presence or motion of the robot was observed. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962
Workflow-Based Software Development Environment
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
2013-01-01
The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (collaborative web environment).
Task Management in the New ATLAS Production System
NASA Astrophysics Data System (ADS)
De, K.; Golubkov, D.; Klimentov, A.; Potekhin, M.; Vaniachine, A.; Atlas Collaboration
2014-06-01
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
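The Meta-Task/task/job decomposition that DEFT manages and JEDI translates can be pictured with a toy object model; the class and field names below are hypothetical sketches, not the actual DEFT/JEDI schema.

# Toy sketch of the Meta-Task -> task -> job object model described above.
# All names and fields are hypothetical, not the actual DEFT/JEDI schema.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Job:                        # smallest unit of executed work (PanDA job)
    job_id: int
    input_files: List[str]

@dataclass
class Task:                       # group of jobs sharing one transformation
    task_id: int
    depends_on: List[int]         # intra-Meta-Task dependencies
    jobs: List[Job] = field(default_factory=list)

@dataclass
class MetaTask:                   # inter-dependent group of tasks (DEFT level)
    name: str
    tasks: List[Task] = field(default_factory=list)

    def ready_tasks(self, done: Set[int]) -> List[Task]:
        # what a JEDI-like layer would next translate into workload jobs
        return [t for t in self.tasks
                if t.task_id not in done and set(t.depends_on) <= done]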
Design and implementation of an internet-based electrical engineering laboratory.
He, Zhenlei; Shen, Zhangbiao; Zhu, Shanan
2014-09-01
This paper describes an internet-based electrical engineering laboratory (IEE-Lab) with virtual and physical experiments at Zhejiang University. In order to synthesize the advantages of both experiment styles, the IEE-Lab adopts a Client/Server/Application framework and combines virtual and physical experiments. The design and workflow of the IEE-Lab are introduced. The analog electronic experiment is taken as an example to show the Flex plug-in design, data communication based on XML (Extensible Markup Language), experiment simulation modeled in Modelica, and the design of control terminals. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Genetic Design Automation: engineering fantasy or scientific renewal?
Lux, Matthew W.; Bramlett, Brian W.; Ball, David A.; Peccoud, Jean
2013-01-01
Synthetic biology aims to make genetic systems more amenable to engineering, which has naturally led to the development of Computer-Aided Design (CAD) tools. Experimentalists still primarily rely on project-specific ad-hoc workflows instead of domain-specific tools, suggesting that CAD tools are lagging behind the front line of the field. Here, we discuss the scientific hurdles that have limited the productivity gains anticipated from existing tools. We argue that the real value of efforts to develop CAD tools is the formalization of genetic design rules that determine the complex relationships between genotype and phenotype. PMID:22001068
Development of CFD model for augmented core tripropellant rocket engine
NASA Astrophysics Data System (ADS)
Jones, Kenneth M.
1994-10-01
The Space Shuttle era has made major advances in technology and vehicle design to the point that the concept of a single-stage-to-orbit (SSTO) vehicle appears more feasible. NASA presently is conducting studies into the feasibility of certain advanced concept rocket engines that could be utilized in a SSTO vehicle. One such concept is a tripropellant system which burns kerosene and hydrogen initially and at altitude switches to hydrogen. This system will attain a larger mass fraction because LOX-kerosene engines have a greater average propellant density and greater thrust-to-weight ratio. This report describes the investigation to model the tripropellant augmented core engine. The physical aspects of the engine, the CFD code employed, and results of the numerical model for a single modular thruster are discussed.
Automated multiplex genome-scale engineering in yeast
Si, Tong; Chao, Ran; Min, Yuhao; Wu, Yuying; Ren, Wen; Zhao, Huimin
2017-01-01
Genome-scale engineering is indispensable in understanding and engineering microorganisms, but the current tools are mainly limited to bacterial systems. Here we report an automated platform for multiplex genome-scale engineering in Saccharomyces cerevisiae, an important eukaryotic model and widely used microbial cell factory. Standardized genetic parts encoding overexpression and knockdown mutations of >90% yeast genes are created in a single step from a full-length cDNA library. With the aid of CRISPR-Cas, these genetic parts are iteratively integrated into the repetitive genomic sequences in a modular manner using robotic automation. This system allows functional mapping and multiplex optimization on a genome scale for diverse phenotypes including cellulase expression, isobutanol production, glycerol utilization and acetic acid tolerance, and may greatly accelerate future genome-scale engineering endeavours in yeast. PMID:28469255
Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.
2006-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.
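The "one-line URL" style of service call is straightforward to picture. Below is an illustrative sketch of fetching a map layer from an OGC WMS server in Python; the endpoint and layer name are invented, while the key/value parameters follow the standard WMS 1.1.1 GetMap convention.

# Illustrative REST-style call to an OGC Web Mapping Server of the kind
# SciFlo chains into dataflows. The endpoint and layer name are invented;
# the parameters follow the standard WMS 1.1.1 GetMap convention.
import requests

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "airs_cloud_fraction",   # hypothetical layer name
    "bbox": "-180,-90,180,90",
    "srs": "EPSG:4326",
    "width": 720,
    "height": 360,
    "format": "image/png",
}
resp = requests.get("https://example.org/wms", params=params, timeout=60)
resp.raise_for_status()
with open("cloud_fraction.png", "wb") as f:
    f.write(resp.content)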
Modular magazine for suitable handling of microparts in industry
NASA Astrophysics Data System (ADS)
Grimme, Ralf; Schmutz, Wolfgang; Schlenker, Dirk; Schuenemann, Matthias; Stock, Achim; Schaefer, Wolfgang
1998-01-01
Microassembly and microadjustment techniques are key technologies in the industrial production of hybrid microelectromechanical systems. One focal point in current microproduction research and engineering is the design and development of high-precision microassembly and microadjustment equipment capable of operating within the framework of flexible automated industrial production. As well as these developments, suitable microassembly tools for industrial use also need to be equipped with interfaces for the supply and delivery of microcomponents. The microassembly process necessitates the supply of microparts in a geometrically defined manner. In order to reduce processing steps and production costs, there is a demand for magazines capable of providing free accessibility to the fixed microcomponents. Commonly used at present are feeding techniques which originate from the field of semiconductor production. However, none of these techniques fully meets the requirements of industrial microassembly technology. A novel modular magazine set, developed and tested in a joint project, is presented here. The magazines are able to hold microcomponents during cleaning, inspection and assembly without any additional handling steps. The modularity of their design allows for maximum technical flexibility. The modular magazine fits into currently practiced SEMI standards. The design and concept of the magazine enables industrial manufacturers to promote a cost-efficient and flexible precision assembly of microelectromechanical systems.
Modular Closed-Loop Control of Diabetes
Magni, L.; Dassau, E.; Hughes-Karvetski, C.; Toffanin, C.; De Nicolao, G.; Del Favero, S.; Breton, M.; Man, C. Dalla; Renard, E.; Zisser, H.; Doyle, F. J.; Cobelli, C.; Kovatchev, B. P.
2015-01-01
Modularity plays a key role in many engineering systems, allowing for plug-and-play integration of components, enhancing flexibility and adaptability, and facilitating standardization. In the control of diabetes, i.e., the so-called “artificial pancreas,” modularity allows for the step-wise introduction of (and regulatory approval for) algorithmic components, starting with subsystems for assured patient safety and followed by higher layer components that serve to modify the patient’s basal rate in real time. In this paper, we introduce a three-layer modular architecture for the control of diabetes, consisting of a sensor/pump interface module (IM), a continuous safety module (CSM), and a real-time control module (RTCM), which separates the functions of insulin recommendation (postmeal insulin for mitigating hyperglycemia) and safety (prevention of hypoglycemia). In addition, we provide details of instances of all three layers of the architecture: the APS© serving as the IM, the safety supervision module (SSM) serving as the CSM, and the range correction module (RCM) serving as the RTCM. We evaluate the performance of the integrated system via in silico preclinical trials, demonstrating 1) the ability of the SSM to reduce the incidence of hypoglycemia under nonideal operating conditions and 2) the ability of the RCM to reduce glycemic variability. PMID:22481809
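The three-layer separation of concerns can be sketched in a few lines; the control law, thresholds, and names below are invented placeholders, not the published APS/SSM/RCM algorithms.

# Minimal sketch of the IM/CSM/RTCM layering described above. The gains,
# thresholds, and class names are invented placeholders, not the published
# APS/SSM/RCM algorithms.

def rtcm(glucose_mgdl, basal_u_per_h):
    """Real-time control module: nudge the basal rate toward a target."""
    target, gain = 120.0, 0.005            # illustrative values
    return basal_u_per_h + gain * (glucose_mgdl - target)

def csm(glucose_mgdl, proposed_rate):
    """Continuous safety module: attenuate insulin near hypoglycemia."""
    if glucose_mgdl < 70.0:                # hard stop (illustrative threshold)
        return 0.0
    if glucose_mgdl < 90.0:                # cap delivery in the low range
        return min(proposed_rate, 0.5)
    return proposed_rate

class Pump:                                # stand-in for the IM's pump side
    basal_rate = 1.0
    def command(self, rate):
        print(f"deliver {rate:.2f} U/h")

def interface_module(sensor_mgdl, pump):   # IM bridges hardware and layers
    pump.command(csm(sensor_mgdl, rtcm(sensor_mgdl, pump.basal_rate)))

interface_module(150.0, Pump())            # -> deliver 1.15 U/h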
Casini, Arturo; MacDonald, James T.; Jonghe, Joachim De; Christodoulou, Georgia; Freemont, Paul S.; Baldwin, Geoff S.; Ellis, Tom
2014-01-01
Overlap-directed DNA assembly methods allow multiple DNA parts to be assembled together in one reaction. These methods, which rely on sequence homology between the ends of DNA parts, have become widely adopted in synthetic biology, despite being incompatible with a key principle of engineering: modularity. To address this, we present MODAL: a Modular Overlap-Directed Assembly with Linkers strategy that brings modularity to overlap-directed methods, allowing assembly of an initial set of DNA parts into a variety of arrangements in one-pot reactions. MODAL is accompanied by a custom software tool that designs overlap linkers to guide assembly, allowing parts to be assembled in any specified order and orientation. The in silico design of synthetic orthogonal overlapping junctions allows for much greater efficiency in DNA assembly for a variety of different methods compared with using non-designed sequences. In tests with three different assembly technologies, the MODAL strategy gives assembly of both yeast and bacterial plasmids, composed of up to five DNA parts in the kilobase range with efficiencies of between 75 and 100%. It also seamlessly allows mutagenesis to be performed on any specified DNA parts during the process, allowing the one-step creation of construct libraries valuable for synthetic biology applications. PMID:24153110
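The in silico linker-design step can be illustrated with a toy rejection-sampling loop, which is not MODAL's actual algorithm: draw random sequences and keep only those sufficiently dissimilar from every linker kept so far.

# Toy orthogonal-linker design in the spirit of the MODAL software (not its
# actual algorithm): keep random sequences that differ from all kept linkers.
# A real design would also screen reverse complements, GC content, and
# secondary structure.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def design_linkers(n, length=40, min_dist=15, seed=1):
    random.seed(seed)
    linkers = []
    while len(linkers) < n:
        cand = "".join(random.choice("ACGT") for _ in range(length))
        if all(hamming(cand, kept) >= min_dist for kept in linkers):
            linkers.append(cand)
    return linkers

print(design_linkers(5))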
Fediai, Artem; Ryndyk, Dmitry A; Cuniberti, Gianaurelio
2016-10-05
Up to now, the electrical properties of the contacts between 3D metals and 2D materials have never been computed at a fully ab initio level due to the huge number of atomic orbitals involved in a current path from an electrode to a pristine 2D material. As a result, there are still numerous open questions and controversial theories on the electrical properties of systems with 3D/2D interfaces-for example, the current path and the contact length scalability. Our work provides a first-principles solution to this long-standing problem with the use of the modular approach, a method which rigorously combines a Green function formalism with the density functional theory (DFT) for this particular contact type. The modular approach is a general approach valid for any 3D/2D contact. As an example, we apply it to the most investigated among 3D/2D contacts-metal/graphene contacts-and show its abilities and consistency by comparison with existing experimental data. As it is applicable to any 3D/2D interface, the modular approach allows the engineering of 3D/2D contacts with the pre-defined electrical properties.
Split green fluorescent protein as a modular binding partner for protein crystallization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Hau B.; Hung, Li-Wei; Yeates, Todd O.
2013-12-01
A strategy using a new split green fluorescent protein (GFP) as a modular binding partner to form stable protein complexes with a target protein is presented. The modular split GFP may open the way to rapidly creating crystallization variants. A modular strategy for protein crystallization using split green fluorescent protein (GFP) as a crystallization partner is demonstrated. Insertion of a hairpin containing GFP β-strands 10 and 11 into a surface loop of a target protein provides two chain crossings between the target and the reconstituted GFP compared with the single connection afforded by terminal GFP fusions. This strategy was tested by inserting this hairpin into a loop of another fluorescent protein, sfCherry. The crystal structure of the sfCherry-GFP(10–11) hairpin in complex with GFP(1–9) was determined at a resolution of 2.6 Å. Analysis of the complex shows that the reconstituted GFP is attached to the target protein (sfCherry) in a structurally ordered way. This work opens the way to rapidly creating crystallization variants by reconstituting a target protein bearing the GFP(10–11) hairpin with a variety of GFP(1–9) mutants engineered for favorable crystallization.
MACOP modular architecture with control primitives
Waegeman, Tim; Hermans, Michiel; Schrauwen, Benjamin
2013-01-01
Walking, catching a ball and reaching are all tasks in which humans and animals exhibit advanced motor skills. Findings in biological research concerning motor control suggest a modular control hierarchy which combines movement/motor primitives into complex and natural movements. Engineers draw inspiration from these findings in the quest for adaptive and skillful control for robots. In this work we propose a modular architecture with control primitives (MACOP) which uses a set of controllers, where each controller becomes specialized in a subregion of its joint and task-space. Instead of having a single controller being used in this subregion [such as MOSAIC (modular selection and identification for control), on which MACOP is inspired], MACOP relates more to the idea of continuously mixing a limited set of primitive controllers. By enforcing a set of desired properties on the mixing mechanism, a mixture of primitives emerges unsupervised which successfully solves the control task. We evaluate MACOP on a numerical model of a robot arm by training it to generate desired trajectories. We investigate how the tracking performance is affected by the number of controllers in MACOP and examine how the individual controllers and their generated control primitives contribute to solving the task. Furthermore, we show how MACOP compensates for the dynamic effects caused by a fixed control rate and the inertia of the robot. PMID:23888140
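The central idea, continuously mixing a limited set of primitive controllers rather than switching between them, can be written as a weighted blend whose weights favor the currently best-performing primitives. The softmax weighting below is an illustrative stand-in for MACOP's actual mixing mechanism.

# Sketch of mixing primitive controllers instead of switching between them.
# The softmax-over-errors weighting is an illustrative stand-in for MACOP's
# actual mixing mechanism.
import numpy as np

def mix_controllers(x, controllers, errors, temperature=1.0):
    """x: state; controllers: functions state -> command; errors: recent
    tracking error of each primitive (lower error -> larger weight)."""
    w = np.exp(-np.asarray(errors) / temperature)
    w /= w.sum()                            # mixing weights sum to one
    commands = np.array([c(x) for c in controllers])
    return w @ commands                     # continuous blend, no switching

primitives = [lambda x: -0.5 * x, lambda x: -2.0 * x]  # two toy 1-D laws
print(mix_controllers(1.0, primitives, errors=[0.1, 0.4]))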
Toward solar biodiesel production from CO2 using engineered cyanobacteria.
Woo, Han Min; Lee, Hyun Jeong
2017-05-01
Metabolic engineering of cyanobacteria has received attention as a sustainable strategy to convert carbon dioxide to various biochemicals including fatty acid-derived biodiesel. Recently, Synechococcus elongatus PCC 7942, a model cyanobacterium, has been engineered to convert CO2 to fatty acid ethyl esters (FAEEs) as biodiesel. A modular pathway has been constructed for FAEE production. Several metabolic engineering strategies were discussed to improve the production levels of FAEEs, including host engineering by improving the CO2 fixation rate and photosynthetic efficiency. In addition, protein engineering of a key enzyme in S. elongatus PCC 7942 was implemented to address issues with FAEE secretion toward sustainable FAEE production from CO2. Finally, advanced metabolic engineering will promote the development of biosolar cell factories to convert CO2 to a feasible amount of FAEEs toward solar biodiesel. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine; a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment using a digital communications network and engine-mounted high temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.
NASA Technical Reports Server (NTRS)
Borowski, S.; Clark, J.; Sefcik, R.; Corban, R.; Alexander, S.
1995-01-01
The results of integrated systems and mission studies are presented which quantify the benefits and rationale for developing a common, modular lunar/Mars space transportation system (STS) based on nuclear thermal rocket (NTR) technology. At present NASA's Exploration Program Office (ExPO) is considering chemical propulsion for an 'early return to the Moon' and NTR propulsion for the more demanding Mars missions to follow. The time and cost to develop these multiple systems are expected to be significant. The Nuclear Propulsion Office (NPO) has examined a variety of lunar and Mars missions and heavy lift launch vehicle (HLLV) options in an effort to determine a 'standardized' set of engine and stage components capable of satisfying a wide range of Space Exploration Initiative (SEI) missions. By using these components in a 'building block' fashion, a variety of single and multi-engine lunar and Mars vehicles can be configured. For NASA's 'First Lunar Outpost' (FLO) mission, an expendable NTR stage powered by two 50 klbf engines can deliver approximately 96 metric tons (t) to translunar injection (TLI) conditions for an initial mass in low earth orbit (IMLEO) of approximately 198 t compared to 250 t for a cryogenic chemical TLI stage. The NTR stage liquid hydrogen (LH2) tank has a 10 m diameter, 14.5 m length, and 66 t LH2 capacity. The NTR utilizes a UC-ZrC-graphite 'composite' fuel with a specific impulse (Isp) capability of approximately 900 s and an engine thrust-to-weight ratio of approximately 4.3. By extending the size and LH2 capacity of the lunar NTR stage to approximately 20 m and 96 t, respectively, a single launch Mars cargo vehicle capable of delivering approximately 50 t of surface payload is possible. Three 50 klbf NTR engines and the two standardized LH2 tank sizes developed for lunar and Mars cargo vehicle applications would be used to configure the Mars piloted vehicle for a mission as early as 2010. The paper describes the features of the 'common' NTR-based moon/Mars STS, examines performance sensitivities resulting from different 'mission mode' assumptions, and quantifies potential schedule and cost benefits resulting from this modular moon/Mars NTR vehicle approach.
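A back-of-envelope check of the quoted TLI figures with the rocket equation, assuming a translunar-injection Delta-v of about 3.15 km/s (a typical value, not stated in the abstract):

\Delta v = I_{sp}\, g_0 \ln\frac{m_0}{m_f}
\quad\Rightarrow\quad
\frac{m_0}{m_f} = \exp\!\left(\frac{3150~\mathrm{m/s}}{900~\mathrm{s}\times 9.81~\mathrm{m/s^2}}\right) \approx 1.43

so the post-burn mass is roughly 198 t / 1.43, or about 139 t, leaving on the order of 43 t for the stage and residuals after the approximately 96 t delivered to TLI, consistent in magnitude with the stage described above.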
Computational System For Rapid CFD Analysis In Engineering
NASA Technical Reports Server (NTRS)
Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.
1995-01-01
Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Connolly, Joseph W.
2016-01-01
This paper discusses the design and application of model-based engine control (MBEC) for use during emergency operation of the aircraft. The MBEC methodology is applied to the Commercial Modular Aero-Propulsion System Simulation 40k (CMAPSS40k) and features an optimal tuner Kalman Filter (OTKF) to estimate unmeasured engine parameters, which can then be used for control. During an emergency scenario, normally-conservative engine operating limits may be relaxed to increase the performance of the engine and overall survivability of the aircraft; this comes at the cost of additional risk of an engine failure. The MBEC architecture offers the advantage of estimating key engine parameters that are not directly measureable. Estimating the unknown parameters allows for tighter control over these parameters, and on the level of risk the engine will operate at. This will allow the engine to achieve better performance than possible when operating to more conservative limits on a related, measurable parameter.
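The estimation idea at the heart of MBEC, inferring an unmeasured parameter from measured ones through a model, can be illustrated with a textbook linear Kalman filter; the two-state model below is generic, not the CMAPSS40k optimal tuner design.

# Generic linear Kalman filter illustrating estimation of an unmeasured
# state from a measured one; not the CMAPSS40k optimal tuner design.
import numpy as np

A = np.array([[0.98, 0.10],
              [0.00, 1.00]])       # dynamics: [measured output, hidden param]
H = np.array([[1.0, 0.0]])         # only the first state is measured
Q = np.diag([1e-4, 1e-6])          # process noise covariance
R = np.array([[1e-2]])             # measurement noise covariance

x, P = np.zeros((2, 1)), np.eye(2)
for z in [0.10, 0.12, 0.15, 0.13, 0.16]:   # fake measurement stream
    x, P = A @ x, A @ P @ A.T + Q          # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)  # update with the innovation
    P = (np.eye(2) - K @ H) @ P
print("estimated hidden parameter:", float(x[1, 0]))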
Steam engine research for solar parabolic dish
NASA Technical Reports Server (NTRS)
Demler, R. L.
1981-01-01
The parabolic dish solar concentrator provides an opportunity to generate high grade energy in a modular system. Most of the capital is projected to be in the dish and its installation. Assurance of a high production demand of a standard dish could lead to dramatic cost reductions. High production volume in turn depends upon maximum application flexibility by providing energy output options, e.g., heat, electricity, chemicals and combinations thereof. Subsets of these options include energy storage and combustion assist. A steam engine design and experimental program is described which investigates the efficiency potential of a small 25 kW compound reheat cycle piston engine. An engine efficiency of 35 percent is estimated for a 700 C steam temperature from the solar receiver.
Implications of multiplane-multispeed balancing for future turbine engine design and cost
NASA Technical Reports Server (NTRS)
Badgley, R. H.
1974-01-01
This paper describes several alternative approaches, provided by multiplane-multispeed balancing, to traditional gas turbine engine manufacture and assembly procedures. These alternatives, which range from addition of trim-balancing at the end of the traditional assembly process to modular design of the rotating system for assembly and balancing external to the engine, require attention by the engine designer as an integral part of the design process. Since multiplane-multispeed balancing may be incorporated at one or more of several points during manufacture-assembly, its deliberate use is expected to provide significant cost and performance (reduced vibration) benefits. Moreover, its availability provides the designer with a firm base from which he may advance, with reasonable assurance of success, into the flexible rotor dynamic regime.
Instructor's Guide for Fluid Mechanics: A Modular Approach.
ERIC Educational Resources Information Center
Cox, John S.
This guide is designed to assist engineering teachers in developing an understanding of fluid mechanics in their students. The course is designed around a set of nine self-paced learning modules, each of which contains a discussion of the subject matter; incremental objectives; problem index, set and answers; resource materials; and a quiz with…
ERIC Educational Resources Information Center
Wahlgren, Marie; Ahlberg, Anders
2013-01-01
In Swedish higher education, quality assurance mainly focuses on course module outcomes. With this in mind we developed a qualitative method to monitor and stimulate progression of learning in two modularized engineering study programmes. A set of core professional values and skills were triangulated through interviews with students, teachers,…
A Comparative Evaluation of Computer-Managed and Instructor-Managed Instruction.
ERIC Educational Resources Information Center
Ellis, John A.
This study compares an instructor-managed (IMI) and a computer-managed (CMI) version of a modularized, individualized Navy Training course; the main difference between groups was in testing and remediation. Subjects were 240 students enrolled in the Propulsion Engineering School at Great Lakes, Illinois, who were divided into three groups: CMI…
Modular Biopower System Providing Combined Heat and Power for DoD Installations
2013-12-01
[Only fragments of this report abstract are recoverable: a life-cycle cost evaluation used the experimental results of the 6-month field demonstration together with the system's projected cost and performance; after a short period of operation, the custom-designed engine developed mechanical problems, resulting in a significant program delay.]
Performance of the NEXT Engineering Model Power Processing Unit
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Hopson, Mark; Todd, Philip C.; Wong, Brian
2007-01-01
NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. An engineering model (EM) power processing unit (PPU) for the NEXT project was designed and fabricated by L-3 Communications under contract with NASA Glenn Research Center (GRC). This modular PPU is capable of processing from 0.5 to 7.0 kW of output power for the NEXT ion thruster. Its design includes many significant improvements for better performance over the state-of-the-art PPU. The most significant difference is the beam supply, which is comprised of six modules and capable of very efficient operation through a wide voltage range because of innovative features like dual controls, module addressing, and a high current mode. The low voltage power supplies are based on elements of the previously validated NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) PPU. The highly modular construction of the PPU resulted in improved manufacturability, simpler scalability, and lower cost. This paper describes the design of the EM PPU and the results of the bench-top performance tests.
Globus | Informatics Technology for Cancer Research (ITCR)
Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include the communication data network, smart sensor and actuator nodes, the centralized control system (FADEC, full-authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.
Distributed utility technology cost, performance, and environmental characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Y; Adelman, S
1995-06-01
Distributed Utility (DU) is an emerging concept in which modular generation and storage technologies sited near customer loads in distribution systems and specifically targeted demand-side management programs are used to supplement conventional central station generation plants to meet customer energy service needs. Research has shown that implementation of the DU concept could provide substantial benefits to utilities. This report summarizes the cost, performance, and environmental and siting characteristics of existing and emerging modular generation and storage technologies that are applicable under the DU concept. It is intended to be a practical reference guide for utility planners and engineers seeking information on DU technology options. This work was funded by the Office of Utility Technologies of the US Department of Energy.
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
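The modularization benefit is easiest to see in miniature: a cross-cutting concern such as timing is factored out of the business logic and applied as advice. Real AOP languages such as AspectJ weave advice at compile time or in the virtual machine; the Python decorator below is only an analogue of the idea (and of the per-call overhead such mechanisms introduce).

# A cross-cutting concern (timing) modularized out of the business logic.
# Python decorators only approximate AOP "around" advice; real AOP weaves
# advice at compile time or in the VM, which is what the paper optimizes.
import functools
import time

def timing_aspect(func):
    @functools.wraps(func)
    def advice(*args, **kwargs):
        t0 = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - t0:.6f} s")
        return result
    return advice

@timing_aspect                      # advice applied without touching the body
def compute(n):
    return sum(i * i for i in range(n))

compute(100_000)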
BeamDyn: A High-Fidelity Wind Turbine Blade Solver in the FAST Modular Framework: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Q.; Sprague, M.; Jonkman, J.
2015-01-01
BeamDyn, a Legendre-spectral-finite-element implementation of geometrically exact beam theory (GEBT), was developed to meet the design challenges associated with highly flexible composite wind turbine blades. In this paper, the governing equations of GEBT are reformulated into a nonlinear state-space form to support its coupling within the modular framework of the FAST wind turbine computer-aided engineering (CAE) tool. Different time integration schemes (implicit and explicit) were implemented and examined for wind turbine analysis. Numerical examples are presented to demonstrate the capability of this new beam solver. An example analysis of a realistic wind turbine blade, the CX-100, is also presented as validation.
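Schematically, such a reformulation casts the second-order structural equations in the first-order form that a modular coupling framework can integrate alongside other modules (the exact GEBT expressions are given in the paper):

M(q)\,\dot{v} = f(q, v, u, t), \qquad \dot{q} = v
\quad\Longrightarrow\quad
\dot{x} = F(x, u, t), \qquad x = \begin{pmatrix} q \\ v \end{pmatrix}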
Multi-level meta-workflows: new concept for regularly occurring tasks in quantum chemistry.
Arshad, Junaid; Hoffmann, Alexander; Gesing, Sandra; Grunzke, Richard; Krüger, Jens; Kiss, Tamas; Herres-Pawlis, Sonja; Terstyanszky, Gabor
2016-01-01
In Quantum Chemistry, many tasks are reoccurring frequently, e.g. geometry optimizations, benchmarking series etc. Here, workflows can help to reduce the time of manual job definition and output extraction. These workflows are executed on computing infrastructures and may require large computing and data resources. Scientific workflows hide these infrastructures and the resources needed to run them. It requires significant efforts and specific expertise to design, implement and test these workflows. Many of these workflows are complex and monolithic entities that can be used for particular scientific experiments. Hence, their modification is not straightforward, which makes them almost impossible to share. To address these issues we propose developing atomic workflows and embedding them in meta-workflows. Atomic workflows deliver a well-defined research domain specific function. Publishing workflows in repositories enables workflow sharing inside and/or among scientific communities. We formally specify atomic and meta-workflows in order to define data structures to be used in repositories for uploading and sharing them. Additionally, we present a formal description focused at orchestration of atomic workflows into meta-workflows. We investigated the operations that represent basic functionalities in Quantum Chemistry, developed the relevant atomic workflows and combined them into meta-workflows. Having these workflows we defined the structure of the Quantum Chemistry workflow library and uploaded these workflows in the SHIWA Workflow Repository. Graphical Abstract: Meta-workflows and embedded workflows in the template representation.
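The atomic/meta distinction can be captured with a toy orchestration model; the names below are illustrative, and real workflows in the SHIWA repository carry far richer metadata.

# Toy model of atomic workflows embedded in a meta-workflow; illustrative
# names only, not the SHIWA repository's actual data structures.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AtomicWorkflow:                       # one well-defined domain function
    name: str
    run: Callable[[Dict], Dict]

@dataclass
class MetaWorkflow:                         # orchestrates atomic workflows
    steps: List[AtomicWorkflow]
    def run(self, data: Dict) -> Dict:
        for step in self.steps:             # simple linear orchestration
            data = step.run(data)
        return data

geom_opt = AtomicWorkflow("geometry_optimization",
                          lambda d: {**d, "geometry": "optimized"})
energy = AtomicWorkflow("single_point_energy",
                        lambda d: {**d, "energy_hartree": -76.4})  # fake value
print(MetaWorkflow([geom_opt, energy]).run({"molecule": "H2O"}))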
Analysis of In-Space Assembly of Modular Systems
NASA Technical Reports Server (NTRS)
Moses, Robert W.; VanLaak, James; Johnson, Spencer L.; Chytka, Trina M.; Reeves, John D.; Todd, B. Keith; Moe, Rud V.; Stambolian, Damon B.
2005-01-01
Early system-level life cycle assessments facilitate cost effective optimization of system architectures to enable implementation of both modularity and in-space assembly, two key Exploration Systems Research & Technology (ESR&T) Strategic Challenges. Experiences with the International Space Station (ISS) demonstrate that the absence of this rigorous analysis can result in increased cost and operational risk. An effort is underway, called Analysis of In-Space Assembly of Modular Systems, to produce an innovative analytical methodology, including an evolved analysis toolset and proven processes in a collaborative engineering environment, to support the design and evaluation of proposed concepts. The unique aspect of this work is that it will produce the toolset, techniques and initial products to analyze and compare the detailed, life cycle costs and performance of different implementations of modularity for in-space assembly. A multi-Center team consisting of experienced personnel from the Langley Research Center, Johnson Space Center, Kennedy Space Center, and the Goddard Space Flight Center has been formed to bring their resources and experience to this development. At the end of this 30-month effort, the toolset will be ready to support the Exploration Program with an integrated assessment strategy that embodies all life-cycle aspects of the mission from design and manufacturing through operations to enable early and timely selection of an optimum solution among many competing alternatives. Already there are many different designs for crewed missions to the Moon that present competing views of modularity requiring some in-space assembly. The purpose of this paper is to highlight the approach for scoring competing designs.
Sensor Data Qualification Technique Applied to Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Simon, Donald L.
2013-01-01
This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and rely on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.
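A minimal version of the analytical-redundancy idea: predict each sensor from the other members of its network with an empirically fitted model, and isolate the sensor whose residual is large. The coefficients and threshold below are invented for illustration.

# Analytical-redundancy sketch: each sensor is predicted from the others
# via an empirically fitted linear model; a large residual isolates a fault.
# Model coefficients and the threshold are invented for this example.
import numpy as np

def check_network(readings, models, threshold=3.0):
    """readings: {sensor: value}; models: {sensor: (weights over the other
    sensors in key order, bias, residual standard deviation)}."""
    faulty = []
    for s, (w, b, sigma) in models.items():
        others = np.array([v for k, v in sorted(readings.items()) if k != s])
        residual = readings[s] - (w @ others + b)
        if abs(residual) > threshold * sigma:
            faulty.append(s)
    return faulty

models = {"N1": (np.array([0.9, 0.1]), 0.0, 0.5)}  # predict N1 from N2, P3
print(check_network({"N1": 80.0, "N2": 85.0, "P3": 30.0}, models))  # -> []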
Reprogramming cellular functions with engineered membrane proteins.
Arber, Caroline; Young, Melvin; Barth, Patrick
2017-10-01
Taking inspiration from Nature, synthetic biology utilizes and modifies biological components to expand the range of biological functions for engineering new practical devices and therapeutics. While early breakthroughs mainly concerned the design of gene circuits, recent efforts have focused on engineering signaling pathways to reprogram cellular functions. Since signal transduction across cell membranes initiates and controls intracellular signaling, membrane receptors have been targeted by diverse protein engineering approaches despite limited mechanistic understanding of their function. The modular architecture of several receptor families has enabled the empirical construction of chimeric receptors combining domains from distinct native receptors which have found successful immunotherapeutic applications. Meanwhile, progress in membrane protein structure determination, computational modeling and rational design promise to foster the engineering of a broader range of membrane receptor functions. Marrying empirical and rational membrane protein engineering approaches should enable the reprogramming of cells with widely diverse fine-tuned functions. Copyright © 2017 Elsevier Ltd. All rights reserved.
MTpy - Python Tools for Magnetotelluric Data Processing and Analysis
NASA Astrophysics Data System (ADS)
Krieger, Lars; Peacock, Jared; Thiel, Stephan; Inverarity, Kent; Kirkby, Alison; Robertson, Kate; Soeffky, Paul; Didana, Yohannes
2014-05-01
We present the Python package MTpy, which provides functions for the processing, analysis, and handling of magnetotelluric (MT) data sets. MT is a relatively immature and not widely applied geophysical method in comparison to other geophysical techniques such as seismology. As a result, the data processing within the academic MT community is not thoroughly standardised and is often based on a loose collection of software, adapted to the respective local specifications. We have developed MTpy to overcome problems that arise from missing standards, and to provide a simplification of the general handling of MT data. MTpy is written in Python, and the open-source code is freely available from a GitHub repository. The setup follows the modular approach of successful geoscience software packages such as GMT or Obspy. It contains sub-packages and modules for the various tasks within the standard work-flow of MT data processing and interpretation. In order to allow the inclusion of already existing and well established software, MTpy does not only provide pure Python classes and functions, but also wrapping command-line scripts to run standalone tools, e.g. modelling and inversion codes. Our aim is to provide a flexible framework, which is open for future dynamic extensions. MTpy has the potential to promote the standardisation of processing procedures and at the same time be a versatile supplement for existing algorithms. Here, we introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing, interpretation, and visualisation utilising MTpy on example data sets collected over different regions of Australia and the USA.
CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.
Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid
2013-08-09
The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a MATLAB-based command-line software toolbox providing automated whole cell segmentation of images showing surface-stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface-stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software-based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in MATLAB, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image-based screening. PMID:23938087
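CellSegm itself is a MATLAB toolbox, but the pipeline it describes is straightforward to prototype in other languages. The following Python sketch (NumPy, SciPy, scikit-image) is an illustrative analogue of steps (i)-(iii), with a simple intensity threshold standing in for step (iv)'s feature-based classification; it is not CellSegm's API.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_cells(volume, sigma=1.0, marker_threshold=0.5):
    # (i) smoothing: suppress acquisition noise
    smoothed = ndi.gaussian_filter(volume.astype(float), sigma=sigma)
    # (ii) ridge/edge image: stained membranes become high "elevation"
    elevation = np.linalg.norm(np.array(np.gradient(smoothed)), axis=0)
    # (iii) marker-controlled watershed: flood from bright cell interiors
    # (the threshold here stands in for step (iv)'s classification)
    markers, n_candidates = ndi.label(smoothed > marker_threshold)
    return watershed(elevation, markers), n_candidates

# Toy volume containing two "cells"
vol = np.zeros((32, 32, 32))
vol[8:12, 8:12, 8:12] = 1.0
vol[20:24, 20:24, 20:24] = 1.0
labels, n = segment_cells(vol)
print(n)  # -> 2 for this synthetic volume
```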
MSNoise: Not Only dv/v! A Framework for Continuous Seismic Data Analysis
NASA Astrophysics Data System (ADS)
Mordret, A.; Lecocq, T.; De Plaen, R.; Caudron, C.; Brenguier, F.
2015-12-01
MSNoise is an open and free Python package known to be the only complete integrated workflow designed to analyse ambient seismic noise and study relative velocity changes (dv/v) in the crust. It is based on state-of-the-art, well-maintained Python modules, among which ObsPy plays an important role. To our knowledge, it is officially used for continuous monitoring in at least three notable places: the Observatory of the Piton de la Fournaise volcano (OVPF, France), the Auckland Volcanic Field (New Zealand), and for monitoring following the South Napa earthquake (Berkeley, USA). It is also used by many researchers to process archive data, e.g. focusing on fault zones, intraplate Europe, geothermal exploitation, or Antarctica. We first present the general workings of MSNoise, originally written in 2010 to automatically scan data archives and process seismic data in order to produce dv/v time series. We demonstrate that its modularity provides a new potential to easily test new algorithms for each processing step: for example, new methods of cross-correlation (done by default in the frequency domain), stacking (the default is linear stacking, i.e. averaging), or dt/t and dv/v estimation (the default is the moving-window cross-spectrum "MWCS", or so-called "doublet", method). Finally, we present the last major evolution of MSNoise, from a single workflow (data archive to dv/v) to a framework system that allows plugins and modules to be developed and integrated into the MSNoise ecosystem. Examples of plugins in development, such as continuous PPSD (à la McNamara & Buland) or continuous RSAM/SSAM (Endo & Murray, Stephens), will be presented.
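The frequency-domain cross-correlation that MSNoise performs by default rests on the correlation theorem. A self-contained NumPy sketch of the idea follows; it is not MSNoise's actual implementation, which adds windowing, whitening, and stacking on top.

```python
import numpy as np

def fft_cross_correlate(a, b):
    """Cross-correlate two equal-length traces via the correlation
    theorem: corr(a, b) = IFFT(FFT(a) * conj(FFT(b))).
    Returns correlations for lags -(n-1) .. +(n-1)."""
    n = a.size
    nfft = 2 * n - 1                       # zero-pad to get linear correlation
    spec = np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft))
    cc = np.fft.irfft(spec, nfft)
    return np.roll(cc, n - 1)              # put zero lag in the middle

# Sanity check: a trace correlated against a 5-sample-delayed copy of
# itself should peak at lag +5.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
delayed = np.roll(sig, 5)
lag = np.argmax(fft_cross_correlate(delayed, sig)) - (sig.size - 1)
print(lag)  # -> 5
```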
Towards a Unified Architecture for Data-Intensive Seismology in VERCE
NASA Astrophysics Data System (ADS)
Klampanos, I.; Spinuso, A.; Trani, L.; Krause, A.; Garcia, C. R.; Atkinson, M.
2013-12-01
Modern seismology involves managing, storing and processing large datasets, typically geographically distributed across organisations. Performing computational experiments using these data generates more data, which in turn have to be managed, further analysed and frequently made available within or outside the scientific community. As part of the EU-funded project VERCE (http://verce.eu), we research and develop a number of use cases and interfacing technologies to satisfy the data-intensive requirements of modern seismology. Our solution seeks to support: (1) familiar programming environments to develop and execute experiments, in particular via Python/ObsPy, (2) a unified view of heterogeneous computing resources, public or private, through the adoption of workflows, (3) monitoring of experiments and validation of data products at varying granularities, via a comprehensive provenance system, (4) reproducibility of experiments and consistency in collaboration, via a shared registry of processing units and contextual metadata (computing resources, data, etc.). Here, we provide a brief account of these components and their roles in the proposed architecture. Our design integrates heterogeneous distributed systems, while allowing researchers to retain current practices and control data handling and execution via higher-level abstractions. At the core of our solution lies the workflow language Dispel. While Dispel can be used to express workflows in fine detail, it may also be used as part of meta- or job-submission workflows. User interaction can be provided through a visual editor or through custom applications on top of parameterisable workflows, which is the approach VERCE follows. According to our design, the scientist may use versions of Dispel workflow processing elements offered by the VERCE library or override them by introducing custom scientific code, using ObsPy. This approach has the advantage that, while the scientist uses a familiar tool, the resulting workflow can be executed transparently on a number of underlying stream-processing engines, such as STORM or OGSA-DAI. While making efficient use of arbitrarily distributed resources and large datasets is a priority, such processing requires adequate provenance tracking and monitoring. Hiding computation and orchestration details behind a workflow system allows us to embed provenance harvesting where appropriate, without impeding the user's regular working patterns. Our provenance model is based on the W3C PROV standard and can provide information of varying granularity regarding execution, systems and data consumption/production. A video demonstrating a prototype provenance exploration tool can be found at http://bit.ly/15t0Fz0. Keeping experimental methodology and results open and accessible, as well as encouraging reproducibility and collaboration, is of central importance to modern science. As our users are expected to be based at different geographical locations, to have access to different computing resources and to employ customised scientific codes, the use of a shared registry of workflow components, implementations, data and computing resources is critical.
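As a flavour of how provenance harvesting can be embedded without disturbing a user's working patterns, here is a minimal, generic Python sketch: a decorator logs each processing element's invocation with field names loosely borrowed from W3C PROV. It is an illustration of the idea, not VERCE's actual machinery.

```python
import functools
import time
import uuid

PROV_LOG = []  # in a real system this would feed a provenance store

def harvest_provenance(fn):
    """Record each call of a processing element as a PROV-style activity,
    with the inputs it 'used' and the output it 'generated'."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {
            "activity": str(uuid.uuid4()),
            "type": fn.__name__,
            "startedAtTime": time.time(),
            "used": [repr(a) for a in args]
                    + [f"{k}={v!r}" for k, v in kwargs.items()],
        }
        result = fn(*args, **kwargs)
        record["endedAtTime"] = time.time()
        record["generated"] = repr(result)
        PROV_LOG.append(record)
        return result
    return wrapper

@harvest_provenance
def decimate(samples, factor):
    """Stand-in for an ObsPy-based processing element."""
    return samples[::factor]

print(decimate(list(range(10)), factor=2))
print(PROV_LOG[0]["type"], PROV_LOG[0]["used"])
```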
ESO Reflex: a graphical workflow engine for data reduction
NASA Astrophysics Data System (ADS)
Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo
ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general-purpose graphical interface. ESO Reflex is based on the Taverna system, which was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows, and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services, and experiments using ESO Reflex to access Virtual Observatory web services have been performed successfully. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use, both locally and using remote Virtual Observatory web services. ESO Reflex is expected to be released to the community in early 2009.
Space Station redesign option A: Modular buildup concept
NASA Technical Reports Server (NTRS)
1993-01-01
In early 1993, President Clinton mandated that NASA look at lower cost alternatives to Space Station Freedom. He also established an independent advisory committee - the Blue Ribbon Panel - to review the redesign work and evaluate alternatives. Daniel Goldin, NASA Administrator, established a Station Redesign Team that began operating in late March from Crystal City, Virginia. NASA intercenter teams - one each at Marshall Space Flight Center, Johnson Space Center, and Langley Research Center - provided engineering and other support. The results of the Option A study done at Marshall Space Flight Center are summarized. Two configurations (A-1 and A-2) are covered. Additional data are provided in the briefing packages MSFC SRT-001, Final System Review, and SRT-002, Space Station Option A Modular Buildup Concept, Volumes 1-5, Revision B, June 10, 1993. In June 1993, President Clinton decided to proceed with a modular concept consistent with Option A, and asked NASA to provide an Implementation Plan by September. All data from the Option A redesign activity were provided to NASA's Transition Team for use in developing the Implementation Plan.
Modular open RF architecture: extending VICTORY to RF systems
NASA Astrophysics Data System (ADS)
Melber, Adam; Dirner, Jason; Johnson, Michael
2015-05-01
Radio frequency products spanning multiple functions have become increasingly critical to the warfighter. Military use of the electromagnetic spectrum now includes communications, electronic warfare (EW), intelligence, and mission command systems. Due to the urgent needs of counterinsurgency operations, various quick reaction capabilities (QRCs) have been fielded to enhance warfighter capability. Although these QRCs were highly successful in their respective missions, they were designed independently, resulting in significant challenges when integrated on a common platform. This paper discusses how the Modular Open RF Architecture (MORA) addresses these challenges by defining an open architecture for multifunction missions that decomposes monolithic radio systems into high-level components with well-defined functions and interfaces. The functional decomposition maximizes hardware sharing while minimizing added complexity and cost due to modularization. MORA achieves significant size, weight and power (SWaP) savings by allowing hardware such as power amplifiers and antennas to be shared across systems. By separating signal conditioning from the processing that implements the actual radio application, MORA exposes previously inaccessible architecture points, providing system integrators with the flexibility to insert third-party capabilities to address technical challenges and emerging requirements. MORA leverages the Vehicular Integration for Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR)/EW Interoperability (VICTORY) framework. This paper concludes by discussing how MORA, VICTORY and other standards such as OpenVPX are being leveraged by the U.S. Army Research, Development, and Engineering Command (RDECOM) Communications-Electronics Research, Development, and Engineering Center (CERDEC) to define a converged architecture enabling rapid technology insertion, interoperability and reduced SWaP.
Contreras, Iván; Kiefer, Stephan; Vehi, Josep
2017-01-01
Diabetes self-management is a crucial element for all people with diabetes and those at risk for developing the disease. Diabetic patients should be empowered to increase their self-management skills in order to prevent or delay the complications of diabetes. This work presents the proposal and first development stages of a smartphone application focused on the empowerment of patients with diabetes. The concept of this interventional tool is based on the personalization of the user experience from an adaptive and dynamic perspective. The segmentation of the population and the dynamic treatment of user profiles across the different experience levels are the main challenges of the implementation. The self-management assistant and remote treatment for diabetes aims to develop a platform that integrates a series of innovative models and tools, rigorously tested and supported by the diabetes research literature, together with a proven workflow-management engine for healthcare.
Generation of genome-modified Drosophila cell lines using SwAP.
Franz, Alexandra; Brunner, Erich; Basler, Konrad
2017-10-02
The ease of generating genetically modified animals and cell lines has been markedly increased by the recent development of the versatile CRISPR/Cas9 tool. However, while the isolation of isogenic cell populations is usually straightforward for mammalian cell lines, the generation of clonal Drosophila cell lines has remained a longstanding challenge, hampered by the difficulty of getting Drosophila cells to grow at low densities. Here, we describe a highly efficient workflow to generate clonal Cas9-engineered Drosophila cell lines using a combination of cell pools, limiting dilution in conditioned medium and PCR with allele-specific primers, enabling the efficient selection of a clonal cell line with a suitable mutation profile. We validate the protocol by documenting the isolation, selection and verification of eight independently Cas9-edited armadillo mutant Drosophila cell lines. Our method provides a powerful and simple workflow that improves the utility of Drosophila cells for genetic studies with CRISPR/Cas9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Hongyi; Li, Yang; Zeng, Danielle
Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction, and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters, and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.
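Once the simulation chain is wrapped as a function of the design variables, the optimization loop itself is routine. A toy SciPy sketch follows; the response surfaces and numbers are invented stand-ins, not the paper's models.

```python
import numpy as np
from scipy.optimize import minimize

# Invented surrogate responses standing in for the chained simulations
# (processing -> RVE -> property prediction -> structural performance).
# x = [processing parameter, microstructure parameter, geometry parameter]
def stiffness(x):
    return 40.0 + 8.0 * x[1] + 5.0 * x[2] - 0.5 * (x[0] - 2.0) ** 2

def weight(x):
    return 10.0 + 3.0 * x[2]

# Minimize structural weight subject to a stiffness floor (illustrative
# numbers only); SciPy selects SLSQP for constrained problems like this.
res = minimize(
    weight,
    x0=np.array([1.0, 0.5, 1.5]),
    bounds=[(0.0, 4.0), (0.0, 1.0), (0.5, 2.0)],
    constraints=[{"type": "ineq", "fun": lambda x: stiffness(x) - 50.0}],
)
print(res.x, weight(res.x), stiffness(res.x))
```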
System, apparatus and methods to implement high-speed network analyzers
Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E
2015-11-10
Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer, which is emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast- and slow-path workflow used to accelerate specific processing units. Such a dispatcher can also be used as a cache of policies: when a matching policy is found, the packet manipulations associated with it can be performed quickly. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
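The fast/slow-path split with a policy cache can be illustrated in a few lines of Python; the flow keys, policies, and the "engine" below are invented stand-ins, not the patented implementation.

```python
from typing import Callable, Dict, Tuple

FlowKey = Tuple[str, str, int, int]  # (src ip, dst ip, src port, dst port)

class Dispatcher:
    """Toy fast/slow-path dispatcher: the first packet of a flow takes the
    slow path (full analysis), and the resulting verdict is cached so later
    packets of the same flow take the fast path."""

    def __init__(self, slow_path: Callable[[bytes], str]):
        self.slow_path = slow_path
        self.policy_cache: Dict[FlowKey, str] = {}

    def handle(self, key: FlowKey, packet: bytes) -> str:
        policy = self.policy_cache.get(key)
        if policy is None:                   # slow path: run the full engine
            policy = self.slow_path(packet)
            self.policy_cache[key] = policy  # install for future packets
        return policy                        # fast path on cache hits

# Usage: a trivial "engine" that flags packets containing a signature.
d = Dispatcher(lambda pkt: "drop" if b"EVIL" in pkt else "accept")
key = ("10.0.0.1", "10.0.0.2", 1234, 80)
print(d.handle(key, b"EVIL payload"))  # slow path -> "drop"
print(d.handle(key, b"next packet"))   # fast path, cached -> "drop"
```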
Simulation software: engineer processes before reengineering.
Lepley, C J
2001-01-01
People make decisions all the time using intuition. But what happens when you are asked: "Are you sure your predictions are accurate? How much will a mistake cost? What are the risks associated with this change?" Once a new process is engineered, it is difficult to analyze what would have been different if other options had been chosen. Simulating a process can help senior clinical officers solve complex patient flow problems and avoid wasted efforts. Simulation software can give you the data you need to make decisions. The author introduces concepts, methodologies, and applications of computer aided simulation to illustrate their use in making decisions to improve workflow design.
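The questions the author raises ("Are you sure your predictions are accurate?") are exactly what a small simulation can answer with data. As a minimal illustration, and not the commercial simulation tools the article has in mind, this single-server patient-flow model can be checked against queueing theory.

```python
import random

def simulate_clinic(n_patients=10000, arrival_rate=4.0, service_rate=5.0, seed=1):
    """Minimal single-server patient-flow simulation (M/M/1): returns the
    average time a patient spends waiting plus being seen. Rates are in
    patients/hour and purely illustrative."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_time = 0.0
    for _ in range(n_patients):
        t_arrival += rng.expovariate(arrival_rate)     # next arrival
        start = max(t_arrival, server_free_at)         # wait if server busy
        server_free_at = start + rng.expovariate(service_rate)
        total_time += server_free_at - t_arrival       # time in system
    return total_time / n_patients

# M/M/1 theory predicts 1/(mu - lambda) = 1 hour in the system for these
# rates; the simulated estimate should approach that value.
print(simulate_clinic())
```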
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
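In the spirit of the paper's simulations (though not its actual code), the sketch below builds a tunable rugged expression landscape over discrete expression levels and searches it with a simple per-gene coordinate ascent that uses one gene library per design-build-test round.

```python
import numpy as np

def make_landscape(n_genes=3, levels=6, ruggedness=0.3, seed=0):
    """Simulated expression landscape: a smooth single-peak component
    plus random roughness (raise `ruggedness` for harder problems)."""
    r = np.random.default_rng(seed)
    peak = r.integers(0, levels, n_genes)
    grid = np.stack(np.meshgrid(*[np.arange(levels)] * n_genes, indexing="ij"))
    smooth = -((grid - peak.reshape(-1, *[1] * n_genes)) ** 2).sum(axis=0)
    return smooth + ruggedness * levels * r.standard_normal([levels] * n_genes)

def coordinate_ascent(landscape, start, rounds=3):
    """One gene library per design-build-test round: vary a single gene's
    expression level, keep the best variant, move to the next gene."""
    x = list(start)
    for _ in range(rounds):
        for gene in range(len(x)):
            scores = [landscape[tuple(x[:gene] + [lvl] + x[gene + 1:])]
                      for lvl in range(landscape.shape[gene])]
            x[gene] = int(np.argmax(scores))
    return tuple(x), landscape[tuple(x)]

land = make_landscape()
best, titer = coordinate_ascent(land, start=[0, 0, 0])
print(best, titer)
```

Raising `ruggedness` makes this greedy strategy stall on local optima, which mirrors the paper's observation that the best experimental design parameters depend on landscape ruggedness.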
Genetic design automation: engineering fantasy or scientific renewal?
Lux, Matthew W; Bramlett, Brian W; Ball, David A; Peccoud, Jean
2012-02-01
The aim of synthetic biology is to make genetic systems more amenable to engineering, which has naturally led to the development of computer-aided design (CAD) tools. Experimentalists still primarily rely on project-specific ad hoc workflows instead of domain-specific tools, which suggests that CAD tools are lagging behind the front line of the field. Here, we discuss the scientific hurdles that have limited the productivity gains anticipated from existing tools. We argue that the real value of efforts to develop CAD tools is the formalization of genetic design rules that determine the complex relationships between genotype and phenotype.
Vu, Trung N; Valkenborg, Dirk; Smets, Koen; Verwaest, Kim A; Dommisse, Roger; Lemière, Filip; Verschoren, Alain; Goethals, Bart; Laukens, Kris
2011-10-20
Nuclear magnetic resonance spectroscopy (NMR) is a powerful technique to reveal and compare quantitative metabolic profiles of biological tissues. However, chemical and physical sample variations make the analysis of the data challenging, and typically require the application of a number of preprocessing steps prior to data interpretation. For example, noise reduction, normalization, baseline correction, peak picking, spectrum alignment and statistical analysis are indispensable components in any NMR analysis pipeline. We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time to estimate the spectral misalignment, the method makes use of Fast Fourier Transform (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point, the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data. The workflow performance was evaluated using a previously published dataset. Correlation maps, spectral and grey-scale plots show clear improvements in comparison to other methods, and the down-to-earth quantitative analysis works well for the CluPA-aligned spectra. The whole workflow is embedded into a modular and statistically sound framework that is implemented as an R package called "speaq" ("spectrum alignment and quantitation"), which is freely available from http://code.google.com/p/speaq/.
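The BW-ratio itself is simple to compute once spectra are aligned: for each data point, compare the between-group to the within-group sum of squares. Here is a NumPy sketch of the idea (speaq is an R package; this is an illustration, not its code).

```python
import numpy as np

def bw_ratio(spectra, groups):
    """Per-data-point ratio of between-group to within-group sum of squares
    for aligned spectra (rows = spectra, columns = spectral bins)."""
    grand_mean = spectra.mean(axis=0)
    bss = np.zeros(spectra.shape[1])
    wss = np.zeros(spectra.shape[1])
    for g in np.unique(groups):
        block = spectra[groups == g]
        gm = block.mean(axis=0)
        bss += block.shape[0] * (gm - grand_mean) ** 2   # between-group SS
        wss += ((block - gm) ** 2).sum(axis=0)           # within-group SS
    return bss / np.maximum(wss, np.finfo(float).tiny)

# Example: 20 spectra x 100 bins, two groups differing only at bin 10.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))
y = np.repeat([0, 1], 10)
X[y == 1, 10] += 3.0
print(bw_ratio(X, y).argmax())  # -> 10 for this synthetic example
```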
Cormier, Nathan; Kolisnik, Tyler; Bieda, Mark
2016-07-05
There has been an enormous expansion of use of chromatin immunoprecipitation followed by sequencing (ChIP-seq) technologies. Analysis of large-scale ChIP-seq datasets involves a complex series of steps and production of several specialized graphical outputs. A number of systems have emphasized custom development of ChIP-seq pipelines. These systems are primarily based on custom programming of a single, complex pipeline or supply libraries of modules, and do not produce the full range of outputs commonly produced for ChIP-seq datasets. It is desirable to have more comprehensive pipelines, in particular ones addressing common metadata tasks, such as pathway analysis, and pipelines producing standard complex graphical outputs. It is advantageous if these are highly modular systems, available as both turnkey pipelines and individual modules, that are easily comprehensible, modifiable and extensible to allow rapid alteration in response to new analysis developments in this growing area. Furthermore, it is advantageous if these pipelines allow data provenance tracking. We present a set of 20 ChIP-seq analysis software modules implemented in the Kepler workflow system; most (18/20) were also implemented as standalone, fully functional R scripts. The set consists of four full turnkey pipelines and 16 component modules. The turnkey pipelines in Kepler allow data provenance tracking. Implementation emphasized use of common R packages and widely-used external tools (e.g., MACS for peak finding), along with custom programming. This software presents comprehensive solutions and easily repurposed code blocks for ChIP-seq analysis and pipeline creation. Tasks include mapping raw reads, peak finding via MACS, summary statistics, peak location statistics, summary plots centered on the transcription start site (TSS), gene ontology, pathway analysis, and de novo motif finding, among others. These pipelines range from those performing a single task to those performing full analyses of ChIP-seq data. The pipelines are supplied as both Kepler workflows, which allow data provenance tracking, and, in the majority of cases, as standalone R scripts. These pipelines are designed for ease of modification and repurposing.
NASA Astrophysics Data System (ADS)
Gauvin St-Denis, B.; Landry, T.; Huard, D. B.; Byrns, D.; Chaumont, D.; Foucher, S.
2017-12-01
As the number of scientific studies and policy decisions requiring tailored climate information continues to increase, the demand for support from climate service centres to provide the latest information in the format most helpful for the end-user is also on the rise. Ouranos, one such organization based in Montreal, has partnered with the Centre de recherche informatique de Montréal (CRIM) to develop a platform that will offer climate data products identified as most useful for users through years of consultation. The platform is built as modular components that target the various requirements of climate data analysis. The data components host and catalog NetCDF data as well as geographical and political delimitations. The analysis components are made available as atomic operations through Web Processing Service (WPS) or as workflows, whereby the operations are chained through a simple JSON structure and executed on a distributed network of computing resources. The visualization components range from Web Map Service (WMS) to a complete frontend for searching the data, launching workflows and interacting with maps of the results. Each component can easily be deployed and executed as an independent service through the use of Docker technology, and a proxy is available to regulate user workspaces and access permissions. PAVICS includes various components from Birdhouse, a collection of WPS initially developed by the German Climate Computing Centre (DKRZ) and the Institut Pierre Simon Laplace (IPSL), and is designed to be highly interoperable with other WPS as well as many Open Geospatial Consortium (OGC) standards. Further connectivity is made with Earth System Grid Federation (ESGF) nodes, and local results are made searchable using the same API terminology. Other projects conducted by CRIM that integrate with PAVICS include the OGC Testbed 13 Innovation Program (IP) initiative, which will enhance advanced cloud capabilities and application packaging and deployment processes, as well as enabling Earth Observation (EO) processes relevant to climate. As part of its experimental agenda, working implementations of scalable machine learning on big climate data with Spark and SciSpark were delivered.
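The JSON-based chaining of atomic operations might look like the following minimal sketch; the process names, inputs, and schema are hypothetical placeholders rather than PAVICS's actual structure.

```python
import json

# Stand-in implementations for two atomic operations; in a WPS setting
# these would be remote processes rather than local functions.
def subset_bbox(data, bbox):
    return {**data, "bbox": bbox}

def heat_wave_frequency(data, threshold):
    return {**data, "index": f"days over {threshold} C"}

PROCESSES = {"subset_bbox": subset_bbox,
             "heat_wave_frequency": heat_wave_frequency}

workflow_json = """
{ "tasks": [
    {"process": "subset_bbox", "inputs": {"bbox": [-80, 44, -70, 50]}},
    {"process": "heat_wave_frequency", "inputs": {"threshold": 30.0}}
]}
"""

# Chain the tasks: each process receives the previous task's output.
data = {"dataset": "tasmax.nc"}
for task in json.loads(workflow_json)["tasks"]:
    data = PROCESSES[task["process"]](data, **task["inputs"])
print(data)
```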
Concept Development Modular Hybrid Pier (MHP)
2000-02-01
Phase 1 final report on the Modular Hybrid Pier concept, prepared for the Naval Facilities Engineering Service Center, Port Hueneme, California 93043-4370 (Contract Report CR 00-001-SHR). The report notes that rated FRP composite bridges or bridge decks are commercially available from suppliers including Creative Pultrusions, Kansas Structural Systems, Martin Marietta, and Hardcore. Approved for public release; distribution is unlimited.
ERIC Educational Resources Information Center
Maseda, F. J.; Martija, I.; Martija, I.
2012-01-01
This paper describes a novel Electrical Machine and Power Electronic Training Tool (EM&PE_TT), a methodology for using it, and associated experimental educational activities. The training tool is implemented by recreating a whole power electronics system, divided into modular blocks. This process is similar to that applied when…
2011-11-17
Mr. Frank Salvatore, High Performance Technologies. Fixed and Rotary Wing Aircraft: 13274 - "CREATE-AV DaVinci: Model-Based Engineering for Systems..."; "Tools for Reliability Improvement and Addressing Modularity Issues in Evaluation and Physical Testing", Dr. Richard Heine, Army Materiel Systems
2011-06-01
are provided as needed: RCP requesting - in favour of mobile patrols, due to engineer reconnaissance in areas with higher risk of IED occurrence... hospitals, EOD and other military specialists gradually operated. PRT established by the Czech Republic within the ISAF operation in the province of Logar
On-Line Analysis of Southern FIA Data
Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch
2006-01-01
The Southern On-Line Estimator (SOLE) is a web-based FIA database analysis tool designed with an emphasis on modularity. The Java-based user interface is simple and intuitive to use and the R-based analysis engine is fast and stable. Each component of the program (data retrieval, statistical analysis and output) can be individually modified to accommodate major...
Design Description for Team-Based Execution of Autonomous Missions (TEAM), Spiral 1
2008-11-18
TEAM, Spiral 1. The design documentation describes a visualization framework (WorldWind), Hibernate / Hibernate Spatial with XML mappings (hibernate-properties), OGC services (WMS, WCS, WFS), an Enterprise Service Bus (Mule) providing messaging, data transformation, and intelligent routing, and a workflow engine (jBPM, java Business Process Management)... government selected solutions. Neither these nor Mule® are deliverable, but the government may opt to use them if it so chooses.
2017-01-31
mapping critical business workflows and then optimizing them with appropriate evolutionary technology choices is often called "Product Line Architecture"... technologies, products, services, and processes, and the USG evaluates them against its 360° requirements objectives, and refines them as appropriate, clarity... in rapidly evolving technological domains (e.g. by applying best commercial practices for open standard product line architecture). An MP might be
Engineering and Application of Zinc Finger Proteins and TALEs for Biomedical Research.
Kim, Moon-Soo; Kini, Anu Ganesh
2017-08-01
Engineered DNA-binding domains provide a powerful technology for numerous biomedical studies due to their ability to recognize specific DNA sequences. Zinc fingers (ZF) are one of the most common DNA-binding domains and have been extensively studied for a variety of applications, such as gene regulation, genome engineering and diagnostics. Another DNA-binding domain known as the transcription activator-like effector (TALE) has been discovered more recently, and exhibits a previously undescribed DNA-binding mode. Due to their modular architecture and flexibility, TALEs have been rapidly developed into artificial gene-targeting reagents. Here, we describe the methods used to design these DNA-binding proteins and their key applications in biomedical research.
The Rolls Royce Allison RB580 turbofan - Matching the market requirement for regional transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadler, J.H.R.; Peacock, N.J.; Snyder, L.
1989-01-01
The RB580 high bypass turbofan engine has a thrust growth capability to 10,000 lb and has been optimized for efficient operation in regional markets involving 50-70 seat airliners with higher-than-turboprop cruise speeds. The two-spool engine configuration achieves an overall pressure ratio of 24 and features a single-stage wide-chord fan for high efficiency/low noise operation. The highly modular design of the configuration facilitates maintenance and repair; a dual-redundant full-authority digital electronic control system is incorporated. An SFC reduction of the order of 10 percent at cruise thrust is achieved, relative to current engines of comparable thrust class.